Self-organizing maps for visually guided collision-free navigation

A control system for navigation and obstacle avoidance of a visually guided mobile robot is proposed. The robot uses self-organizing principles to adapt its actions to the visual information, minimizing the need for human guidance during learning and operation. The robot is able to adjust itself to new situations and unknown environments, because no a priori knowledge about the locations or shapes of the obstacles is needed; only local camera information and a direction to the goal are required. A simulated robot is used to demonstrate the efficiency of this approach.
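The self-organizing adaptation the abstract refers to is typically realized with Kohonen's self-organizing map update rule: for each input, find the best-matching unit on a node grid and pull that node's weights (and its neighbors') toward the input. The sketch below is a generic illustration of that rule, not the paper's actual controller; the grid size, learning-rate schedule, and neighborhood width are hypothetical choices.

```python
import math
import random

def train_som(samples, grid_w=5, grid_h=5, dim=2, epochs=50,
              lr0=0.5, sigma0=2.0, seed=0):
    """Train a small 2-D self-organizing map on input vectors.

    Illustrative sketch only: all hyperparameters here are
    assumptions, not values from the paper.
    """
    rng = random.Random(seed)
    # One weight vector per grid node, randomly initialized.
    weights = {(i, j): [rng.random() for _ in range(dim)]
               for i in range(grid_w) for j in range(grid_h)}
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
        for x in samples:
            # Best-matching unit: node whose weights are closest to x.
            bmu = min(weights,
                      key=lambda n: sum((w - xi) ** 2
                                        for w, xi in zip(weights[n], x)))
            for node, w in weights.items():
                # Grid distance to the BMU sets the update strength.
                d2 = (node[0] - bmu[0]) ** 2 + (node[1] - bmu[1]) ** 2
                h = math.exp(-d2 / (2 * sigma ** 2))
                for k in range(dim):
                    w[k] += lr * h * (x[k] - w[k])
    return weights
```

In a navigation setting of the kind the abstract describes, the inputs would be local visual features and the map's response would be associated with a steering action, so the robot's behavior self-organizes from experience rather than from a prior environment model.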
