Hybrid methods using genetic algorithms for global optimization

This paper discusses the trade-off between accuracy, reliability, and computing time in global optimization. The particular compromises offered by traditional methods (the Quasi-Newton and Nelder-Mead simplex methods) and by genetic algorithms are addressed and illustrated on an application in nonlinear system identification. New hybrid methods are then designed, combining principles from genetic algorithms and "hill-climbing" methods in order to find a better compromise in this trade-off. Inspired by biology, and especially by the way living beings adapt to their environment, these hybrid methods involve two interwoven levels of optimization, namely evolution (genetic algorithms) and individual learning (Quasi-Newton), which cooperate in a global optimization process. One of these hybrid methods appears to join the group of state-of-the-art global optimization methods: it combines the reliability of genetic algorithms with the accuracy of the Quasi-Newton method, while requiring a computation time only slightly higher than that of the latter.
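
To make the two-level scheme concrete, here is a minimal Python sketch of a hybrid (Lamarckian) genetic algorithm in which each individual is refined by a few quasi-Newton (BFGS) steps before selection. The Rastrigin test objective, the operator choices (tournament selection, blend crossover, Gaussian mutation), and all parameter values are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a hybrid GA with Lamarckian individual learning.
# Assumptions (not from the paper): real-coded genomes, tournament
# selection, blend crossover, Gaussian mutation, Rastrigin objective.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rastrigin(x):
    # Standard multimodal test function; global minimum 0 at the origin.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def learn(x, budget=20):
    # "Individual learning": a few quasi-Newton (BFGS) iterations from x.
    res = minimize(rastrigin, x, method="BFGS", options={"maxiter": budget})
    return res.x, res.fun

def hybrid_ga(dim=5, pop_size=30, generations=40, bounds=(-5.12, 5.12)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    # Lamarckian step: the refined point is written back into the genotype.
    refined = [learn(ind) for ind in pop]
    pop = np.array([x for x, _ in refined])
    fit = np.array([f for _, f in refined])

    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            # Binary tournament selection of two parents.
            a, b = rng.integers(pop_size, size=2)
            p1 = pop[a] if fit[a] < fit[b] else pop[b]
            a, b = rng.integers(pop_size, size=2)
            p2 = pop[a] if fit[a] < fit[b] else pop[b]
            # Blend crossover followed by Gaussian mutation.
            w = rng.random(dim)
            child = w * p1 + (1 - w) * p2 + rng.normal(0, 0.1, dim)
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
        # Evolution (outer loop) interleaved with learning (inner loop).
        refined = [learn(ind) for ind in pop]
        pop = np.array([x for x, _ in refined])
        fit = np.array([f for _, f in refined])

    best = fit.argmin()
    return pop[best], fit[best]

if __name__ == "__main__":
    x_best, f_best = hybrid_ga()
    print(f"best f = {f_best:.6f} at x = {np.round(x_best, 4)}")
```

In this Lamarckian variant the locally improved point replaces the genotype; a Baldwinian variant would keep the original genotype and use only the improved fitness for selection, which trades convergence speed for genetic diversity.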
