Novelty-based restarts for evolution strategies

A major limitation in applying evolution strategies to black-box optimization is the risk of convergence to poor local optima. Many techniques address this problem, mostly by restarting the search. However, choosing the new start location is nontrivial, since neither a good location nor a good scale for sampling a random restart position is known. A black-box search algorithm can nonetheless obtain some information about this location and scale from past exploration. The method proposed here makes explicit use of such experience by constructing an archive of novel solutions during the run. Upon convergence, the most “novel” individual found so far is used to position the new start in the least explored region of the search space, actively seeking a new basin of attraction. We demonstrate the working principle of the method on two multi-modal test problems.
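The restart mechanism described above can be sketched as follows. This is a simplified illustration, not the paper's algorithm: it pairs a basic (1+1)-ES with a novelty archive, scores novelty as mean distance to the k nearest archived points (as in novelty search), and restarts from the most novel of a set of sampled candidates. The Rastrigin function stands in for the paper's multi-modal test problems; all function and parameter names here are illustrative.

```python
import math
import random

def novelty(x, archive, k=3):
    """Sparseness: mean distance to the k nearest archived points."""
    if not archive:
        return float("inf")
    dists = sorted(math.dist(x, a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def rastrigin(x):
    # Common multi-modal benchmark (stand-in for the paper's test problems).
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def novelty_restart_es(f, dim=2, bounds=(-5.0, 5.0), restarts=5,
                       iters=200, sigma0=1.0, seed=0):
    """(1+1)-ES with novelty-based restarts: archive visited points and,
    after each converged run, restart in the least explored region."""
    rng = random.Random(seed)
    lo, hi = bounds
    archive, best_x, best_f = [], None, float("inf")
    x = [rng.uniform(lo, hi) for _ in range(dim)]
    for _ in range(restarts):
        sigma, fx = sigma0, f(x)
        for _ in range(iters):
            y = [xi + rng.gauss(0, sigma) for xi in x]
            fy = f(y)
            if fy <= fx:                     # success: accept, grow step size
                x, fx, sigma = y, fy, sigma * 1.1
            else:                            # failure: shrink step size
                sigma *= 0.97
            archive.append(list(x))
        if fx < best_f:
            best_x, best_f = list(x), fx
        # Restart at the most novel of a batch of sampled candidates,
        # i.e. the one farthest from previously explored regions.
        cands = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(100)]
        x = max(cands, key=lambda c: novelty(c, archive))
    return best_x, best_f
```

Unlike a purely random restart, the sampled candidate maximizing novelty is biased away from every basin already visited, which is the key idea the abstract describes.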
