Self-learning neural control of a mobile robot

Reinforcement learning is a promising paradigm for training intelligent controllers. The learning capabilities of a neural-network-based controller architecture are demonstrated by applying it to control a mobile robot in an unknown environment. Based on the multi-sensor information provided by four infrared sensors, the controller must learn to avoid collisions while receiving only a final training signal of success or failure. The article further shows that simulation can be used to avoid the long training effort that real-world experiments would require.
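The core idea of learning from only a terminal success/failure signal can be illustrated with a minimal sketch. The environment, sensor layout, and tabular Q-learning used below are illustrative assumptions, not the article's neural controller: a robot drifts along a corridor, reads four binary infrared-like sensors, and receives no reward until the episode ends in a collision (failure) or a completed traversal (success).

```python
import random

# Toy stand-in for the setting described in the abstract (all names and
# numbers here are illustrative assumptions, not taken from the article):
# a robot at lateral position 0..6 in a corridor; walls at 0 and 6.
ACTIONS = (-1, 0, 1)  # steer left, go straight, steer right

class CorridorEnv:
    def __init__(self, length=20):
        self.length = length  # steps to survive for a "success" signal

    def reset(self):
        self.pos = 3
        self.step_count = 0
        return self.sense()

    def sense(self):
        # Four binary IR-style sensors: near-left, far-left, far-right, near-right.
        return (int(self.pos <= 1), int(self.pos <= 2),
                int(self.pos >= 4), int(self.pos >= 5))

    def step(self, action):
        self.pos += action
        self.step_count += 1
        if self.pos <= 0 or self.pos >= 6:
            return self.sense(), -1.0, True   # collision: failure signal
        if self.step_count >= self.length:
            return self.sense(), +1.0, True   # traversal complete: success signal
        return self.sense(), 0.0, False       # no intermediate reward at all

def q_learning(env, episodes=2000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning driven only by the terminal reward."""
    rng = random.Random(seed)
    Q = {}
    def q(s, a):
        return Q.get((s, a), 0.0)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration over the three steering actions.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q(s, x))
            s2, r, done = env.step(a)
            target = r if done else r + gamma * max(q(s2, x) for x in ACTIONS)
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = s2
    return Q

def run_greedy(env, Q):
    """Roll out the learned policy; returns the terminal reward."""
    s, done, r = env.reset(), False, 0.0
    while not done:
        a = max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
        s, r, done = env.step(a)
    return r
```

The key property mirrored here is the delayed reward: every non-terminal step returns 0, so the success/failure signal must be propagated backwards through the value estimates, which is what the discounted bootstrapped target accomplishes.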
