From pixels to policies: A bootstrapping agent

An embodied agent senses the world at the pixel level through a large number of sense elements. In order to function intelligently, an agent needs high-level concepts, grounded in the pixel level. For human designers to program these concepts and their grounding explicitly is almost certainly intractable, so the agent must learn these foundational concepts autonomously. We describe an approach by which an autonomous learning agent can bootstrap its way from pixel-level interaction with the world, to individuating and tracking objects in the environment, to learning an effective policy for its behavior. We use methods drawn from computational scientific discovery to identify derived variables that support simplified models of the dynamics of the environment. These derived variables are abstracted to discrete qualitative variables, which serve as features for temporal difference learning. Our method bridges the gap between the continuous tracking of objects and the discrete state representation necessary for efficient and effective learning. We demonstrate and evaluate this approach with an agent experiencing a simple simulated world, through a sensory interface consisting of 60,000 time-varying binary variables in a 200 x 300 array, plus a three-valued motor signal and a real-valued reward signal.
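
To make the last stage of this pipeline concrete, here is a minimal sketch of how a continuous derived variable might be abstracted into discrete qualitative features that then drive a temporal-difference update. This is an illustration only, not the authors' implementation: the (magnitude, trend) abstraction, the Q-learning form of the update, and every name and parameter below (ACTIONS, ALPHA, GAMMA, EPSILON, qsign, qualitative_state, choose_action, td_update) are assumptions introduced for this sketch.

```python
import random

# Assumed constants for the sketch: a three-valued motor signal and
# standard TD-learning parameters (not values taken from the paper).
ACTIONS = (-1, 0, 1)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = {}  # tabular action values keyed by (qualitative state, action)

def qsign(v, eps=1e-3):
    """Qualitative sign of a continuous value: -1, 0, or +1."""
    return 1 if v > eps else (-1 if v < -eps else 0)

def qualitative_state(x, x_prev):
    """Abstract a derived continuous variable into a discrete (magnitude, trend) pair."""
    return (qsign(x), qsign(x - x_prev))

def choose_action(s):
    """Epsilon-greedy action selection over the discrete qualitative state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))

def td_update(s, a, reward, s_next):
    """One temporal-difference (Q-learning) backup on the tabular value function."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in ACTIONS)
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (reward + GAMMA * best_next - Q.get((s, a), 0.0))

# One hypothetical learning step on a derived variable (e.g. a tracked
# object's distance to a target region), before and after an action:
s = qualitative_state(x=0.4, x_prev=0.5)
a = choose_action(s)
td_update(s, a, reward=-1.0, s_next=qualitative_state(x=0.3, x_prev=0.4))
```

The point of the discretization is that the TD learner operates over a small table of qualitative states rather than the raw 60,000-dimensional pixel array, which is what makes learning tractable in this setting.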
