FDA – A Scalable Evolutionary Algorithm for the Optimization of Additively Decomposed Functions

The Factorized Distribution Algorithm (FDA) is an evolutionary algorithm which combines mutation and recombination by using a distribution. The distribution is estimated from a set of selected points. In general, a discrete distribution defined for n binary variables has 2^n parameters and is therefore too expensive to compute. For additively decomposed discrete functions (ADFs) there exist algorithms which factorize the distribution into conditional and marginal distributions. This factorization is used by FDA. The scaling of FDA is investigated theoretically and numerically. The scaling depends on the ADF structure and the specific assignment of function values. Difficult functions on a chain or a tree structure are solved in about O(n√n) operations. More conventional genetic algorithms are not able to optimize these functions. FDA is not restricted to exact factorizations. It also works for approximate factorizations, as is shown for a circle and a grid structure. By using results from Bayesian networks, FDA is extended to LFDA. LFDA computes an approximate factorization using only the data, not the ADF structure. The scaling of LFDA is compared to the scaling of FDA.
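
To make the factorization concrete, the following is a minimal sketch of one possible FDA-style generation loop for a chain-structured ADF with binary variables, where the exact factorization is p(x) = p(x_0) ∏ p(x_i | x_{i-1}). Everything here is illustrative rather than the paper's implementation: the fitness function f_chain, the function and parameter names (fda_chain, pop_size, trunc, generations), and the Laplace smoothing of the estimated probabilities are all assumptions made for the sketch.

```python
# Minimal FDA-style sketch for a chain-structured ADF (illustrative, not the
# paper's implementation). Binary variables, truncation selection.
import numpy as np

rng = np.random.default_rng(0)

def f_chain(x):
    # Illustrative additively decomposed fitness: a sum of sub-functions,
    # each depending only on two neighbouring bits (a chain structure).
    return sum(int(x[i] == x[i + 1]) for i in range(len(x) - 1)) + x[0]

def fda_chain(n=20, pop_size=200, trunc=0.3, generations=50):
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        # Truncation selection: keep the best fraction of the population.
        fitness = np.array([f_chain(ind) for ind in pop])
        sel = pop[np.argsort(fitness)[-int(trunc * pop_size):]]

        # Exact chain factorization: p(x) = p(x_0) * prod_i p(x_i | x_{i-1}).
        # Estimate p(x_0) and each conditional from the selected points,
        # with Laplace smoothing to avoid zero probabilities (an assumption).
        p0 = (sel[:, 0].sum() + 1) / (len(sel) + 2)
        cond = np.empty((n - 1, 2))  # cond[i, a] = P(x_{i+1} = 1 | x_i = a)
        for i in range(n - 1):
            for a in (0, 1):
                rows = sel[sel[:, i] == a]
                cond[i, a] = (rows[:, i + 1].sum() + 1) / (len(rows) + 2)

        # Sample the next population from the factorized distribution.
        pop = np.empty((pop_size, n), dtype=int)
        pop[:, 0] = rng.random(pop_size) < p0
        for i in range(n - 1):
            pop[:, i + 1] = rng.random(pop_size) < cond[i, pop[:, i]]

    best = max(pop, key=f_chain)
    return best, f_chain(best)

print(fda_chain())
```

On richer ADF structures the same loop would estimate one conditional per sub-function of the factorization; LFDA, by contrast, would first have to learn which conditionals to use from the selected points alone, without access to the ADF structure.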
