Active Visual Search
Object search is the task of efficiently searching for a given 3D object in a given 3D environment. The searching agent is equipped with a camera for target detection and, if the environment configuration is not known in advance, with a means of computing depth, such as stereo vision or a laser range finder.
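The planning side of this task can be viewed as a next-best-view loop: maintain a probability distribution over where the target might be, choose the viewing action with the best expected detection payoff per unit cost, and update the distribution after each unsuccessful look. The following is a minimal greedy sketch of that idea only; the cell and action names, detection probabilities, and costs are invented for illustration and are not taken from the cited papers.

```python
def greedy_search(cells, actions, steps):
    """Plan a sequence of viewing actions for object search.

    cells:   dict mapping a region name to the prior probability
             that the target is located there.
    actions: dict mapping an action name to a tuple
             (covered_cells, detection_prob, cost).
    """
    p = dict(cells)
    plan = []
    for _ in range(steps):
        # Greedy step: maximize expected detection probability per unit cost.
        best = max(
            actions,
            key=lambda a: sum(p[c] * actions[a][1] for c in actions[a][0])
            / actions[a][2],
        )
        plan.append(best)
        covered, d, _cost = actions[best]
        # Bayesian update under the assumption the look failed to detect
        # the target: probability mass in the viewed cells is discounted.
        for c in covered:
            p[c] *= 1.0 - d
        total = sum(p.values())
        for c in p:
            p[c] /= total
    return plan
```

With a prior concentrated in one region, the planner looks there first, then shifts attention once a failed look has discounted that region's probability, e.g. `greedy_search({"A": 0.6, "B": 0.3, "C": 0.1}, {"look_A": (["A"], 0.9, 1.0), "look_BC": (["B", "C"], 0.9, 1.0)}, steps=2)` returns `["look_A", "look_BC"]`.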
This strategy was tested on a mobile robot: a Cybermotion Navmaster platform equipped with Laser-Eye, sonar, video, and infrared sensors.
The recognition algorithm used was quite primitive; the point was to test the planning method, which performed well.
A subsequent implementation used a different platform and far better recognition algorithms. Ksenia Shubina's system ran on a Pioneer 3 robot outfitted with a Point Grey Research Bumblebee stereo camera mounted on a Directed Perception pan-tilt unit, and employed the Triclops StereoVision SDK. The papers cited below provide further detail.
Her system was ported to our autonomous wheelchair, Playbot, where it underwent a successful series of tests. In all, the three separate implementations testify to the robustness of the overall strategy.
An example, using the Pioneer platform, appears in the movie below.
A simple target is sought within a room of our lab. The robot knows nothing about the room or its contents other than its exterior walls and size. The robot starts in the middle of the room, facing away from the target. The sequence of images shows the search process. It is recommended that you read the paper (Shubina, K., Tsotsos, J.K., Visual Search for an Object in a 3D Environment Using a Mobile Robot, Departmental Technical Report CSE-2008-02, April 28, 2008) in order to interpret these images.
References
•Ye, Y., Tsotsos, J.K., A Complexity Level Analysis of the Sensor Planning Task for Object Search, Computational Intelligence, Vol. 17, No. 4, p. 605-620, Nov. 2001.
•Ye, Y., Tsotsos, J.K., Sensor Planning for Object Search, Computer Vision and Image Understanding, Vol. 73, No. 2, p. 145-168, 1999.
•Ye, Y., Tsotsos, J.K., Knowledge Difference and its Influence on a Search Agent, 1st International Conference on Autonomous Agents, Marina del Rey, CA, February 1997.
•Ye, Y., Tsotsos, J.K., Sensor Planning in 3D Object Search, Int. Symposium on Intelligent Robotic Systems, Lisbon, July 1996.
•Ye, Y., Tsotsos, J.K., 3D Sensor Planning: Its Formulation and Complexity, International Symposium on Artificial Intelligence and Mathematics, January 1996.
•Ye, Y., Tsotsos, J.K., Where to Look Next in 3D Object Search, IEEE International Symposium on Computer Vision, Coral Gables, Florida, November 1995, p. 539-544.
•Ye, Y., Tsotsos, J.K., The Detection Function in Object Search, The Fourth International Conference for Young Computer Scientists (ICYCS'95), July 19-21, 1995, Beijing.
•Shubina, K., Tsotsos, J.K., Visual Search for an Object in a 3D Environment Using a Mobile Robot, Departmental Technical Report CSE-2008-02, April 28, 2008.
•Tsotsos, J.K., Shubina, K., Attention and Visual Search: Active Robotic Vision Systems that Search, The 5th International Conference on Computer Vision Systems, Bielefeld, March 21-24, 2007.