Incremental self-improvement for life-time multi-agent reinforcement learning

Previous approaches to multi-agent reinforcement learning are either very limited or heuristic by nature. The main reason is that each agent's or "animat's" environment continually changes because the other learning animats keep changing. Traditional reinforcement learning algorithms cannot properly deal with this: their convergence theorems require repeatable trials and strong (typically Markovian) assumptions about the environment. In this paper, however, we use a novel, general, sound method for multiple reinforcement learning "animats", each living a single life with limited computational resources in an unrestricted, changing environment. The method is called "incremental self-improvement" (IS; Schmidhuber, 1994). IS properly takes into account that whatever some animat learns at some point may affect the learning conditions for other animats, or for itself, at any later point. The learning algorithm of an IS-based animat is embedded in its own policy: the animat can not only improve its performance but, in principle, also improve the way it improves, and so on. At certain times in the animat's life, IS uses reinforcement/time ratios to estimate from a single training example (namely, the entire life so far) which previously learned things are still useful; it selectively keeps them and gets rid of those that start appearing harmful. IS is based on an efficient, stack-based backtracking procedure which is guaranteed to make each animat's learning history a history of long-term reinforcement accelerations. Experiments demonstrate the effectiveness of IS. In one experiment, IS learns a sequence of increasingly complex function approximation problems. In another, a multi-agent system consisting of three co-evolving, IS-based animats chasing each other learns interesting, stochastic predator and prey strategies.
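To make the backtracking procedure described above concrete, the following is a minimal sketch of a "success-story" style check on a stack of policy modifications. It is written under stated assumptions: the Checkpoint structure, the undo callback, the sentinel bottom entry, and the function name ssa_evaluate are illustrative choices, not the paper's actual implementation.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Checkpoint:
    time: float                # lifetime at which the policy modification was made
    reward: float              # cumulative reinforcement collected up to that moment
    undo: Callable[[], None]   # restores the policy entries the modification overwrote

def ssa_evaluate(stack: List[Checkpoint], now: float, total_reward: float) -> None:
    """Pop and undo modifications until each surviving checkpoint marks the
    start of a long-term speed-up of reward intake: the reward/time ratio
    measured since a checkpoint must exceed the ratio measured since the
    checkpoint below it on the stack."""
    def ratio_since(cp: Checkpoint) -> float:
        return (total_reward - cp.reward) / max(now - cp.time, 1e-9)

    # stack[0] is assumed to be a sentinel Checkpoint(0.0, 0.0, lambda: None),
    # so the bottom-most comparison is against the lifetime average reward/time.
    while len(stack) > 1 and ratio_since(stack[-1]) <= ratio_since(stack[-2]):
        stack.pop().undo()   # this modification no longer pays off; revert it

After each such evaluation, the reward/time ratios measured since the surviving checkpoints form an increasing sequence from the bottom of the stack to the top, which corresponds to the abstract's notion of a learning history consisting of long-term reinforcement accelerations.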

[1] Pravin Varaiya, et al. Stochastic Systems: Estimation, Identification, and Adaptive Control, 1986.

[2] P. W. Jones, et al. Bandit Problems, Sequential Allocation of Experiments, 1987.

[3] Jürgen Schmidhuber, et al. Reinforcement Learning in Markovian and Non-Markovian Environments, 1990, NIPS.

[4] J. Bather, et al. Multi-Armed Bandit Allocation Indices, 1990.

[5] Stuart J. Russell, et al. Principles of Metareasoning, 1989, Artif. Intell.

[6] Steven Douglas Whitehead, et al. Reinforcement learning for the adaptive control of perception and action, 1992.

[7] Andrew McCallum, et al. Overcoming Incomplete Perception with Utile Distinction Memory, 1993, ICML.

[8] Mark S. Boddy, et al. Deliberation Scheduling for Problem Solving in Time-Constrained Environments, 1994, Artif. Intell.

[9] Jürgen Schmidhuber, et al. On learning how to learn learning strategies, 1994.

[10] Michael I. Jordan, et al. Reinforcement Learning Algorithm for Partially Observable Markov Decision Problems, 1994, NIPS.

[11] Mark B. Ring. Continual learning in reinforcement environments, 1995, GMD-Bericht.

[12] Jürgen Schmidhuber. Discovering Solutions with Low Kolmogorov Complexity and High Generalization Capability, 1995.

[13] Jürgen Schmidhuber. Discovering Solutions with Low Kolmogorov Complexity and High Generalization Capability, 1995, ICML.

[14] Jürgen Schmidhuber, et al. Environment-independent Reinforcement Acceleration, 1995.

[15] Leslie Pack Kaelbling, et al. Learning Policies for Partially Observable Environments: Scaling Up, 1997, ICML.

[16] Russell Greiner, et al. PALO: A Probabilistic Hill-Climbing Algorithm, 1996, Artif. Intell.

[17] Jürgen Schmidhuber, et al. Solving POMDPs with Levin Search and EIRA, 1996, ICML.

[18] Jürgen Schmidhuber, et al. A General Method for Incremental Self-Improvement and Multi-Agent Learning in Unrestricted Environments, 1999.

[19] Leslie Pack Kaelbling, et al. Planning and Acting in Partially Observable Stochastic Domains, 1998, Artif. Intell.

[20] C. A. Nelson, et al. Learning to Learn, 2017, Encyclopedia of Machine Learning and Data Mining.