Incorporating family competition into Gaussian and Cauchy mutations to train neural networks using an evolutionary algorithm

The paper presents an evolutionary technique to train neural networks on tasks requiring learned behavior. Based on family competition principles and adaptive rules, the proposed approach integrates decreasing-based mutations and self-adaptive mutations. The different mutations act as global and local search strategies, respectively, balancing the trade-off between solution quality and convergence speed. The algorithm is applied to two task domains: Boolean functions and the artificial ant problem. Experimental results indicate that on all tested problems, the proposed algorithm outperforms canonical evolutionary algorithms such as genetic algorithms, evolution strategies, and evolutionary programming. Moreover, the essential components of the proposed algorithm, such as its mutation operators and adaptive rules, are thoroughly analyzed.
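The contrast between the two mutation operators can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the placeholder fitness, and the simple best-of-family selection are all assumptions for demonstration; Gaussian noise gives small local perturbations, while the heavy-tailed Cauchy distribution occasionally produces large jumps that support global exploration.

```python
import math
import random

def gaussian_mutation(weights, sigma):
    """Local search: perturb each weight with N(0, sigma^2) noise."""
    return [w + random.gauss(0.0, sigma) for w in weights]

def cauchy_mutation(weights, scale):
    """Global search: Cauchy noise is heavy-tailed, so occasional large
    jumps occur. A standard Cauchy variate is tan(pi * (U - 0.5))."""
    return [w + scale * math.tan(math.pi * (random.random() - 0.5))
            for w in weights]

def dummy_error(weights):
    """Placeholder fitness (hypothetical): squared distance from the origin.
    A real application would measure network error on the training task."""
    return sum(w * w for w in weights)

def family_competition(parent, family_size, mutate, step):
    """Sketch of family competition: one parent spawns a family of
    offspring via a single mutation operator, and only the fittest
    family member survives to compete at the population level."""
    family = [mutate(parent, step) for _ in range(family_size)]
    return min(family, key=dummy_error)
```

For example, `family_competition(weights, 5, cauchy_mutation, 0.5)` would draw five Cauchy-mutated children of `weights` and keep the one with the lowest placeholder error.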
