Autonomous Robust Object-level Perception under Uncertainty and Ambiguity

We propose an algorithm for robust visual classification of an object of interest observed from multiple views, using a black-box Bayesian classifier that provides a measure of uncertainty, in the presence of significant ambiguity, classifier noise, and localization error. The fusion of classifier outputs accounts for viewpoint dependency and spatial correlation among observations, for pose uncertainty at the time each observation is taken, and for a measure of confidence provided by the classifier itself.
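To make the fusion step concrete, here is a minimal sketch in NumPy of combining per-view class likelihoods into a single posterior, with each view tempered by the classifier's own confidence. This is a deliberately simplified baseline that assumes conditionally independent views and a uniform class prior; the actual approach additionally models viewpoint dependency, spatial correlation among observations, and pose uncertainty. The function name and confidence-weighting scheme are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fuse_views(likelihoods, confidences):
    """Fuse per-view class likelihoods into a posterior over classes.

    likelihoods: (n_views, n_classes) array, p(z_k | c) for each view k.
    confidences: (n_views,) classifier confidence in [0, 1]; low-confidence
        views are down-weighted by tempering their log-likelihoods.

    Simplified baseline: assumes conditionally independent views and a
    uniform class prior (so the prior cancels in the normalization).
    """
    log_l = np.log(np.asarray(likelihoods, dtype=float))  # (n_views, n_classes)
    w = np.asarray(confidences, dtype=float)[:, None]     # per-view weights
    log_post = (w * log_l).sum(axis=0)                    # fuse in log domain
    log_post -= log_post.max()                            # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Three views of a two-class problem; the third, contradictory view is
# reported with low confidence, so the fused posterior still favors class 0.
likelihoods = [[0.8, 0.2], [0.7, 0.3], [0.4, 0.6]]
confidences = [0.9, 0.9, 0.2]
posterior = fuse_views(likelihoods, confidences)
```

Working in the log domain avoids underflow when many views are fused, and the confidence weights give a simple mechanism for discounting ambiguous viewpoints.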

Furthermore, we develop a novel approach that infers a distribution over posterior class probabilities within a Bayesian framework while accounting for model uncertainty. This distribution enables reasoning about uncertainty in the posterior classification, and is thus of prime importance for robust classification and object-level perception in uncertain and ambiguous scenarios, and for safe autonomy in general.
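One simple way to obtain such a distribution over posteriors is to propagate samples of the classifier model through the fusion step, as sketched below. Each sample (e.g. one ensemble member or one MC-dropout draw) yields its own fused posterior, and the spread of these posteriors reflects model uncertainty. This sampling-based sketch and its independent-view fusion are illustrative assumptions, not the paper's inference scheme.

```python
import numpy as np

def posterior_distribution(sample_likelihoods):
    """Turn model-uncertainty samples into samples of the class posterior.

    sample_likelihoods: (n_samples, n_views, n_classes) array; each sample
        is one draw of the classifier model, giving per-view class
        likelihoods. For each draw the views are fused (independent views,
        uniform prior), so the returned (n_samples, n_classes) array is a
        set of samples from the distribution over posterior class
        probabilities.
    """
    log_l = np.log(np.asarray(sample_likelihoods, dtype=float)).sum(axis=1)
    log_l -= log_l.max(axis=1, keepdims=True)        # numerical stability
    post = np.exp(log_l)
    return post / post.sum(axis=1, keepdims=True)

# Two model draws, two views, two classes: the second draw is less decisive,
# so the two posterior samples disagree more than a single point estimate
# would reveal.
samples = np.array([[[0.8, 0.2], [0.7, 0.3]],
                    [[0.6, 0.4], [0.5, 0.5]]])
posteriors = posterior_distribution(samples)
mean, spread = posteriors.mean(axis=0), posteriors.std(axis=0)
```

Summarizing the samples by their mean and spread distinguishes a confidently classified object from one whose point-estimate posterior looks similar but is unstable under model uncertainty.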

ICRA’18 3-minute spotlight video:

Related Publications: link to bib file