Fast and Accurate Environment Modelling using Omnidirectional Vision

This paper describes an algorithm that uses the omnidirectional vision system of a RoboCup robot to detect obstacles and landmarks and to build an internal representation of the robot's environment. Restricting the processing to pixels that correspond to an equally spaced grid on the floor around the robot, combined with a biologically inspired, fault-tolerant colour segmentation of this grid, results in fast and robust detection. The computation time and accuracy of the resulting environment model are evaluated by comparing experimental results to object positions provided by an absolute positioning system.
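
The key idea is to precompute, from the camera and mirror calibration, the image pixels that correspond to an equally spaced grid on the floor, and then to colour-classify only those pixels in each frame. The following Python sketch illustrates that idea under several assumptions: the radius-to-pixel calibration function, the grid resolution, and the HSV colour classes are hypothetical placeholders and not the paper's actual calibration or segmentation method.

```python
# Hypothetical sketch: sample an equally spaced floor grid through an
# omnidirectional camera and classify each sampled pixel by colour.
# The mirror model (radius_to_pixels) and the colour classes below are
# assumptions for illustration, not the paper's calibration or segmentation.

import numpy as np

def build_grid_lut(radii_m, n_rays, radius_to_pixels, image_center):
    """Precompute image coordinates for grid points on the floor.

    radii_m          -- equally spaced distances from the robot (metres)
    n_rays           -- number of equally spaced directions around the robot
    radius_to_pixels -- assumed calibration: floor distance -> pixel radius
    image_center     -- (cx, cy) of the omnidirectional image
    """
    cx, cy = image_center
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    lut = np.empty((len(radii_m), n_rays, 2), dtype=np.int32)
    for i, r in enumerate(radii_m):
        rp = radius_to_pixels(r)  # mirror/camera calibration (assumed)
        lut[i, :, 0] = np.round(cx + rp * np.cos(angles)).astype(np.int32)
        lut[i, :, 1] = np.round(cy + rp * np.sin(angles)).astype(np.int32)
    return lut

# Illustrative colour classes in HSV with generous tolerances,
# standing in for the fault-tolerant segmentation described in the paper.
COLOUR_CLASSES = {
    "floor":    ((35, 85),   (60, 255),  (40, 255)),   # green carpet
    "obstacle": ((0, 180),   (0, 80),    (0, 90)),     # dark robots
    "landmark": ((100, 130), (120, 255), (80, 255)),   # e.g. blue goal
}

def classify_hsv(h, s, v):
    """Return the first colour class whose HSV ranges contain the pixel."""
    for name, ((h0, h1), (s0, s1), (v0, v1)) in COLOUR_CLASSES.items():
        if h0 <= h <= h1 and s0 <= s <= s1 and v0 <= v <= v1:
            return name
    return "unknown"

def scan_grid(hsv_image, lut):
    """Classify every grid pixel; only these pixels are ever touched."""
    h_img, w_img = hsv_image.shape[:2]
    labels = np.full(lut.shape[:2], "unknown", dtype=object)
    for i in range(lut.shape[0]):
        for j in range(lut.shape[1]):
            x, y = lut[i, j]
            if 0 <= x < w_img and 0 <= y < h_img:
                labels[i, j] = classify_hsv(*hsv_image[y, x])
    return labels

# Example usage (hypothetical linear calibration and image size):
# lut = build_grid_lut(np.arange(0.3, 4.0, 0.1), 60,
#                      lambda r: 40.0 * r, image_center=(320, 240))
# labels = scan_grid(hsv_frame, lut)
```

Because the lookup table is built once, the per-frame cost depends only on the number of grid points rather than on the image resolution, which is what makes this kind of restriction to a floor grid fast.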
