Creating intelligent agents through shaping of coevolution

Creating agents that behave in complex and believable ways in video games and virtual environments is a difficult task. One solution, shaping, has worked well in the evolution of neural networks for agent control in relatively straightforward environments such as the NERO video game, but it is very labor-intensive. Another solution, coevolution, promises to establish shaping automatically, but it is difficult to control. Although these two approaches have been used separately in the past, they are compatible in principle. This paper shows how shaping can be applied to coevolution to guide it towards more effective behaviors, thus enhancing the power of coevolution in competitive environments. Several automated shaping methods, based on manipulating the fitness function and the game rules, are introduced and tested in a “capture-the-flag”-like environment, where the controller networks for two populations of agents are evolved using the rtNEAT neuroevolution method. Each of these shaping methods, as well as their combinations, is superior to a control, i.e. direct evolution without shaping. The methods are effective in different and sometimes incompatible ways, suggesting that different methods may work best in different environments. Using shaping, it should thus be possible to employ coevolution to create intelligent agents for a variety of games.
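To make the fitness-function variant of shaping concrete, the sketch below shows, in minimal Python, a competitive coevolution loop in which a shaping term is blended into the raw competitive score before selection. This is only an illustration of the general idea under simplifying assumptions: the genome representation, the toy game in capture_score, the shaping_score heuristic, and the SHAPING_WEIGHT parameter are all hypothetical stand-ins and are not taken from the paper, which evolves rtNEAT neural-network controllers in a capture-the-flag-like environment.

```python
import random

# Minimal sketch of fitness-based shaping in competitive coevolution.
# All task-specific pieces (capture_score, shaping_score, the genome
# encoding) are hypothetical placeholders, not the paper's actual setup.

POP_SIZE = 20
GENOME_LEN = 8
GENERATIONS = 50
SHAPING_WEIGHT = 0.5   # how strongly the shaping term biases raw fitness


def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]


def mutate(genome, sigma=0.1):
    return [g + random.gauss(0.0, sigma) for g in genome]


def capture_score(agent, opponent):
    # Placeholder for the competitive outcome of one agent-vs-opponent
    # game (e.g., flags captured minus flags lost); here a toy score.
    return sum(a * (1.0 - abs(o)) for a, o in zip(agent, opponent))


def shaping_score(agent):
    # Placeholder for a domain-knowledge shaping term, e.g., rewarding
    # progress toward the flag even when no capture occurs; here a toy
    # preference for moderate weight magnitudes.
    return -sum(abs(g) for g in agent) / len(agent)


def shaped_fitness(agent, opponents):
    # Raw competitive fitness averaged over sampled opponents, plus the
    # shaping term: this blend is what "manipulating the fitness
    # function" refers to in this sketch.
    raw = sum(capture_score(agent, o) for o in opponents) / len(opponents)
    return raw + SHAPING_WEIGHT * shaping_score(agent)


def evolve_step(population, opponents):
    # Truncation selection plus mutation; rtNEAT would instead evolve
    # network topologies and weights continuously during play.
    scored = sorted(population, key=lambda a: shaped_fitness(a, opponents),
                    reverse=True)
    survivors = scored[:POP_SIZE // 2]
    children = [mutate(random.choice(survivors)) for _ in range(POP_SIZE // 2)]
    return survivors + children


pop_a = [random_genome() for _ in range(POP_SIZE)]
pop_b = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Each population is evaluated against a sample of the other, so the
    # opponents, and therefore the effective task, keep changing.
    pop_a = evolve_step(pop_a, random.sample(pop_b, 5))
    pop_b = evolve_step(pop_b, random.sample(pop_a, 5))
```

The other family of shaping methods mentioned in the abstract, manipulating the game rules, would correspond in this sketch to changing the evaluation environment itself (e.g., what capture_score measures over time) rather than adding a term to the fitness function.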
