On the role of modularity in evolutionary dynamic optimisation

The field of evolutionary dynamic optimisation is concerned with the application of evolutionary algorithms to dynamic optimisation problems. In recent years, numerous new algorithms have been proposed to track a problem's potentially moving global optimum as closely as possible. A large proportion of these techniques attempt to exploit possible similarities between successive problem instances, primarily by using previously found solutions as starting points for future instances: if the previous global optimum is in close proximity to the new global optimum (in the genotype space), such transfer of knowledge should allow the algorithm to locate the new global optimum in less time than a random restart may require. However, it is clear that distance alone may be insufficient to guarantee such computational savings. In this paper, we propose a simple framework that may be used to create bi-modular problems with a variable degree of epistasis. We subsequently study how the dependencies between the two modules may affect the difficulty (the number of function evaluations required) of relocating the new global optimum. We find that, given a simple (1+1) EA, even a modest degree of linkage between the problem's otherwise independent modules may have a significant impact on this difficulty.
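To make the setting concrete, the following is a minimal sketch of a (1+1) EA on a hypothetical bi-modular bitstring problem. The fitness construction here (two OneMax modules coupled by a tunable linkage bonus over the first `k` bits of each module) is an illustrative assumption, not the paper's actual framework; the `bimodular_fitness` and `one_plus_one_ea` names are likewise invented for this sketch.

```python
import random

def bimodular_fitness(x, k):
    """Hypothetical bi-modular fitness: two OneMax modules plus a
    linkage bonus that couples the first k bits of each module.
    The parameter k controls the degree of epistasis (k=0: fully
    independent modules)."""
    half = len(x) // 2
    m1, m2 = x[:half], x[half:]
    base = sum(m1) + sum(m2)
    # Epistatic coupling: bonus awarded only when the first k bits
    # of both modules agree bit-for-bit.
    bonus = k if m1[:k] == m2[:k] else 0
    return base + bonus

def one_plus_one_ea(n=20, k=4, max_evals=10_000, seed=0):
    """Standard (1+1) EA: flip each bit with probability 1/n,
    accept the offspring if it is at least as fit as the parent.
    Returns the best fitness found and the evaluations used."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    best = bimodular_fitness(parent, k)
    evals = 1
    optimum = n + k  # all-ones string under this construction
    while evals < max_evals and best < optimum:
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        f = bimodular_fitness(child, k)
        evals += 1
        if f >= best:
            parent, best = child, f
    return best, evals
```

Counting the evaluations consumed until the optimum is (re)located, for varying `k`, is one straightforward way to measure how inter-module linkage affects the difficulty the abstract refers to.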