Coevolving Strategies for General Game Playing

The General Game Playing Competition (Genesereth et al., 2005) poses a unique challenge for artificial intelligence. To be successful, a player must learn to play well from a limited number of example games encoded in first-order logic and then generalize its play to previously unseen games with entirely different rules. Because good opponents are usually not available, learning algorithms must construct plausible opponent strategies in order to benchmark performance. One approach to learning all player strategies simultaneously is coevolution. This paper presents a coevolutionary approach that uses neuroevolution of augmenting topologies (NEAT) to evolve populations of game state evaluators. The approach is tested on a sample of games from the General Game Playing Competition and shown to be effective: it allows the algorithm designer to minimize the amount of domain knowledge built into the system, which leads to more general game play, and it allows opponent strategies to be modeled efficiently. Furthermore, the general game playing domain proves to be a powerful tool for developing and testing coevolutionary methods.
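The coevolutionary loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a flat weight vector stands in for a NEAT network (real NEAT would also complexify network topology during mutation), the `play` function is a toy dot-product match rather than a true game rollout, and all names and parameters are illustrative.

```python
import random

def play(eval_a, eval_b, state_features):
    """Toy stand-in for a match: each evaluator scores the state;
    the higher-scoring evaluator 'wins' (illustrative only)."""
    score_a = sum(w * f for w, f in zip(eval_a, state_features))
    score_b = sum(w * f for w, f in zip(eval_b, state_features))
    return 1 if score_a > score_b else 0

def coevolve(pop_size=20, n_features=4, generations=10, n_opponents=5, seed=0):
    rng = random.Random(seed)
    # One population of state evaluators per player role; because good
    # opponents are unavailable, each population benchmarks the other.
    pops = [[[rng.uniform(-1, 1) for _ in range(n_features)]
             for _ in range(pop_size)] for _ in range(2)]
    for _ in range(generations):
        # A small sample of "game states" to play over this generation.
        games = [[rng.random() for _ in range(n_features)] for _ in range(3)]
        new_pops = []
        for role in (0, 1):
            # Fitness: wins against opponents sampled from the other population.
            def fitness(ind):
                opponents = rng.sample(pops[1 - role], n_opponents)
                return sum(play(ind, opp, g) for opp in opponents for g in games)
            scored = sorted(pops[role], key=fitness, reverse=True)
            elite = scored[:pop_size // 2]
            # Refill the population by mutating elites (NEAT would also
            # add nodes and connections here, complexifying topology).
            children = [[w + rng.gauss(0, 0.1) for w in rng.choice(elite)]
                        for _ in range(pop_size - len(elite))]
            new_pops.append(elite + children)
        pops = new_pops
    return pops
```

The key property this sketch shows is that neither population needs a hand-coded opponent: each serves as the moving benchmark for the other, which is what lets domain knowledge be kept to a minimum.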

[1] Gerald Tesauro et al. TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play, 1994, Neural Computation.

[2] Susan L. Epstein. For the Right Reasons: The FORR Architecture for Learning in a Skill Domain, 1994, Cognitive Science.

[3] Peter Stone et al. Automatic Heuristic Construction in a Complete General Game Player, 2006, AAAI.

[4] Donald E. Knuth et al. An Analysis of Alpha-Beta Pruning, 1975, Artificial Intelligence.

[5] Ted E. Senator et al. The Financial Crimes Enforcement Network AI System (FAIS): Identifying Potential Money Laundering from Reports of Large Cash Transactions, 1995, AI Magazine.

[6] Susan L. Epstein et al. Learning Game-Specific Spatially-Oriented Heuristics, 1998, Constraints.

[7] Michael R. Genesereth et al. Knowledge Interchange Format, 1991, KR.

[8] L. Van Valen. A New Evolutionary Law, 1973.

[9] Edwin D. de Jong et al. The MaxSolve Algorithm for Coevolution, 2005, GECCO '05.

[10] Jordan B. Pollack et al. A Game-Theoretic Memory Mechanism for Coevolution, 2003, GECCO.

[11] Jonathan Schaeffer et al. CHINOOK: The World Man-Machine Checkers Champion, 1996, AI Magazine.

[12] Hod Lipson et al. 'Managed Challenge' Alleviates Disengagement in Co-evolutionary System Identification, 2005, GECCO '05.

[13] Risto Miikkulainen et al. Competitive Coevolution through Evolutionary Complexification, 2011, Journal of Artificial Intelligence Research.

[14] Michael R. Genesereth et al. General Game Playing: Overview of the AAAI Competition, 2005, AI Magazine.

[15] Nicholas J. Radcliffe et al. Genetic Set Recombination and Its Application to Neural Network Topology Optimisation, 1993, Neural Computing & Applications.

[16] Sevan G. Ficici et al. Monotonic Solution Concepts in Coevolution, 2005, GECCO '05.

[17] Edwin D. de Jong et al. The Incremental Pareto-Coevolution Archive, 2004, GECCO.

[18] Murray Campbell et al. Deep Blue, 2002, Artificial Intelligence.

[19] Risto Miikkulainen et al. Continual Coevolution Through Complexification, 2002, GECCO.

[20] Richard K. Belew et al. Coevolutionary Search Among Adversaries, 1997.

[21] David E. Goldberg et al. Genetic Algorithms with Sharing for Multimodal Function Optimization, 1987, ICGA.

[22] Risto Miikkulainen et al. Evolving Neural Networks through Augmenting Topologies, 2002, Evolutionary Computation.