
Using Multi-View Recognition and Meta-data Annotation to Guide a Robot's Attention


Alexander Thomas, Vittorio Ferrari, Bastian Leibe, Tinne Tuytelaars, Luc Van Gool
International Journal of Robotics Research, Vol. 28, No. 8, pp. 976-998, 2009.

In the transition from industrial to service robotics, robots will have to deal with increasingly unpredictable and variable environments. We present a system that recognizes objects of a certain class in an image and identifies their parts for potential interactions. The method can recognize objects from arbitrary viewpoints and generalizes to instances that have never been observed during training, even if they are partially occluded and appear against cluttered backgrounds. Our approach builds on the Implicit Shape Model of Leibe et al. (2008). We extend it in two directions: we couple recognition to the provision of meta-data useful for a task, and we handle multiple viewpoints by integrating it with the dense multi-view correspondence finder of Ferrari et al. (2006). The meta-data can be part labels, but also depth estimates, material types, or any other pixel-wise annotation. We present experimental results on wheelchairs, cars, and motorbikes.
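To illustrate the general idea of coupling recognition with pixel-wise meta-data transfer, the sketch below shows a toy ISM-style pipeline: matched codebook entries cast Hough votes for the object centre, and the entries consistent with the winning hypothesis back-project the annotation patches they stored at training time. This is only a simplified illustration under assumed data structures (the `matches` dictionaries, patch size `PATCH`, and label count `N_LABELS` are hypothetical), not the authors' actual implementation, which is probabilistic and handles multiple viewpoints.

```python
import numpy as np

N_LABELS = 4   # e.g. background + 3 part labels (illustrative)
PATCH = 8      # side length of a local annotation patch, in pixels (illustrative)

def detect_and_annotate(matches, img_shape, tol=20.0):
    """Toy ISM-style detection with pixel-wise meta-data transfer.

    matches: list of dicts with
      'pos'    -- (y, x) location of the matched feature in the test image
      'offset' -- (dy, dx) vector from the feature to the object centre,
                  stored with the codebook entry at training time
      'weight' -- match confidence
      'meta'   -- PATCH x PATCH array of integer part labels from training
    Returns the strongest centre hypothesis and a per-pixel label map.
    """
    # 1. Hough voting: each match votes for an object centre.
    acc = np.zeros(img_shape)
    for m in matches:
        cy, cx = (int(round(p + o)) for p, o in zip(m['pos'], m['offset']))
        if 0 <= cy < img_shape[0] and 0 <= cx < img_shape[1]:
            acc[cy, cx] += m['weight']
    center = np.unravel_index(np.argmax(acc), img_shape)

    # 2. Back-projection: matches consistent with the winning centre
    #    transfer their training-time annotation patches to the image.
    votes = np.zeros(img_shape + (N_LABELS,))
    for m in matches:
        pred = np.add(m['pos'], m['offset'])
        if np.linalg.norm(pred - np.asarray(center)) > tol:
            continue   # this match does not support the hypothesis
        y, x = int(m['pos'][0]), int(m['pos'][1])
        h = min(PATCH, img_shape[0] - y)
        w = min(PATCH, img_shape[1] - x)
        for dy in range(h):
            for dx in range(w):
                votes[y + dy, x + dx, m['meta'][dy, dx]] += m['weight']

    # 3. Per-pixel decision: keep the label with the highest accumulated weight,
    #    falling back to background where no evidence was transferred.
    labels = np.argmax(votes, axis=-1)
    labels[votes.sum(axis=-1) == 0] = 0
    return center, labels
```

The same back-projection step could transfer depth values or material labels instead of part labels; only the contents of the stored annotation patches change, not the voting machinery.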


