Evolving modular neural networks which generalise well

When dealing with complex problems, a monolithic neural network often becomes too large and unwieldy to design and manage. A practical alternative is a modular neural network system built from simple modules. While much work in neural networks, statistics and machine learning has addressed how to combine different modules in a modular system, little has addressed how to design those modules automatically and how to exploit the interaction between individual module design and module combination. This paper proposes an evolutionary approach to designing modular neural networks. The approach determines the number of individual modules automatically and exploits the interaction between module design and module combination: the relationship among modules is considered during module design itself. This is quite different from the conventional approach, in which module design is separated from module combination. Experimental results on several benchmark problems are presented and discussed.
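The core idea above — evolving a population of simple modules whose fitness reflects not only each module's own error but also how well it combines with the others — can be sketched as follows. This is only an illustrative toy, not the paper's exact algorithm: the module architecture (a small 2-4-1 network), the Gaussian-mutation scheme, the averaging combiner, the XOR task, and the fitness weighting `lam` are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: 2-input XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, X):
    """One module: a 2-4-1 network with tanh hidden units and a sigmoid output."""
    h = np.tanh(X @ w["W1"] + w["b1"])
    return 1.0 / (1.0 + np.exp(-(h @ w["W2"] + w["b2"])))

def random_module():
    return {"W1": rng.normal(0, 1, (2, 4)), "b1": rng.normal(0, 1, 4),
            "W2": rng.normal(0, 1, 4), "b2": rng.normal(0, 1)}

def mutate(w, sigma=0.3):
    """EP-style variation: add Gaussian noise to every weight."""
    return {k: v + rng.normal(0, sigma, np.shape(v)) for k, v in w.items()}

def ensemble_out(modules, X):
    """Combine modules by simply averaging their outputs (one of many choices)."""
    return np.mean([forward(w, X) for w in modules], axis=0)

def fitness(w, others, X, y, lam=0.5):
    """Lower is better. Mixes the module's own error with the error of the
    ensemble it would form with the current population ('others'), so module
    design and module combination interact during evolution."""
    own = np.mean((forward(w, X) - y) ** 2)
    comb = np.mean((ensemble_out(others + [w], X) - y) ** 2)
    return lam * own + (1 - lam) * comb

POP_SIZE = 10
pop = [random_module() for _ in range(POP_SIZE)]
for gen in range(300):
    children = [mutate(w) for w in pop]
    candidates = pop + children
    # Score each candidate by how it performs alone AND alongside the parents.
    candidates.sort(key=lambda w: fitness(w, [m for m in pop if m is not w], X, y))
    pop = candidates[:POP_SIZE]

final_mse = np.mean((ensemble_out(pop, X) - y) ** 2)
```

Note the design choice in `fitness`: because the combination error term changes with each candidate's contribution to the shared ensemble, selection pressure favours modules that complement one another rather than ten copies of the single best network.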
