Reinforcement learning in partially observable mobile robot domains using unsupervised event extraction

This paper describes how learning tasks in partially observable mobile robot domains can be solved by combining reinforcement learning with an unsupervised "event extraction" mechanism called ARAVQ (Adaptive Resource Allocating Vector Quantization). ARAVQ transforms the robot's continuous, noisy, high-dimensional sensory input stream into a compact sequence of high-level events. The resulting hierarchical control system uses a Long Short-Term Memory (LSTM) recurrent neural network as the reinforcement learning component, which learns to select high-level actions based on the history of high-level events. These high-level actions in turn select low-level behaviors that handle real-time motor control. Illustrative experiments based on the Khepera mobile robot simulator are presented.
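
To make the event-extraction idea concrete, the following is a minimal Python sketch of an ARAVQ-style quantizer: it keeps a sliding window of recent inputs and allocates a new model vector when the stable window average fits the buffered data at least δ better than every existing model vector, emitting a discrete event whenever the winning model vector changes. The class name, parameter names (window_size, delta, alpha), and default values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class ARAVQ:
    """Sketch of ARAVQ-style event extraction (parameters illustrative).

    Maintains a set of model vectors and emits the index of the winning
    model vector as a discrete "event" whenever that index changes.
    """

    def __init__(self, window_size=10, delta=0.5, alpha=0.01):
        self.window_size = window_size   # n: inputs averaged per step
        self.delta = delta               # stability/novelty threshold
        self.alpha = alpha               # learning rate for winner update
        self.buffer = []                 # sliding window of recent inputs
        self.models = []                 # allocated model vectors
        self.last_event = None

    def _avg_dist(self, v):
        # Mean Euclidean distance from v to the buffered inputs.
        return np.mean([np.linalg.norm(x - v) for x in self.buffer])

    def step(self, x):
        """Feed one sensory input; return an event index when a new
        high-level event occurs, else None."""
        self.buffer.append(np.asarray(x, dtype=float))
        if len(self.buffer) > self.window_size:
            self.buffer.pop(0)
        if len(self.buffer) < self.window_size:
            return None                  # not enough data yet

        x_bar = np.mean(self.buffer, axis=0)  # moving average of window
        eps_bar = self._avg_dist(x_bar)       # stability of the window

        if self.models:
            dists = [self._avg_dist(m) for m in self.models]
            best = int(np.argmin(dists))
            best_dist = dists[best]
        else:
            best, best_dist = None, np.inf

        # Allocate a new model vector when the stable window average
        # fits the data at least delta better than every existing model.
        if eps_bar + self.delta <= best_dist:
            self.models.append(x_bar.copy())
            best = len(self.models) - 1
        else:
            # Otherwise, slowly adapt the winner toward the window mean.
            self.models[best] += self.alpha * (x_bar - self.models[best])

        if best != self.last_event:
            self.last_event = best
            return best                  # new high-level event
        return None
```

In the hierarchical scheme described above, the event indices emitted by step() would form the compact input sequence consumed by the LSTM reinforcement learner, which in turn selects the low-level motor behaviors.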