Binary rule encoding schemes: a study using the compact classifier system

Several binary rule encoding schemes have been proposed for Pittsburgh-style classifier systems. This paper focuses on analyzing how the choice of rule encoding biases the scalability of learning maximally general and accurate rules in classifier systems. A theoretical analysis of maximally general and accurate rules under two different binary rule encoding schemes yields results with clear implications for the scalability of any genetics-based machine learning system that uses the studied encodings. These results are particularly relevant because one of the binary representations studied is widely used in Pittsburgh-style classifier systems, yet it exhibits an exponential shrinkage in the proportion of useful rules available as the problem size increases.
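To make the scalability argument concrete, here is a minimal sketch of the counting intuition behind an exponential shrinkage result. It assumes two illustrative 2-bits-per-attribute encodings of the ternary rule alphabet {0, 1, #}; these mappings are hypothetical simplifications for exposition, not necessarily the exact schemes analyzed in the paper. The sketch computes the fraction of uniformly random rule strings that decode to a maximally general (all don't-care) condition as the number of attributes grows.

```python
# Sketch: fraction of random bitstrings decoding to the fully general rule
# (all don't-cares) under two hypothetical 2-bit encodings of {0, 1, #}.
#
# Encoding A (redundant):  00 -> 0, 01 -> 1, 10 -> #, 11 -> #   => P[#] = 1/2
# Encoding B (GABIL-like): 01 -> 0, 10 -> 1, 11 -> #, 00 -> none => P[#] = 1/4

def fraction_fully_general(p_hash: float, n_attributes: int) -> float:
    """Probability that a uniformly random rule string is all don't-cares.

    Each attribute decodes to '#' independently with probability p_hash,
    so the fully general rule occurs with probability p_hash ** n.
    """
    return p_hash ** n_attributes

for n in (5, 10, 20, 40):
    a = fraction_fully_general(0.5, n)   # Encoding A
    b = fraction_fully_general(0.25, n)  # Encoding B
    print(f"n={n:3d}  encoding A: {a:.3e}  encoding B: {b:.3e}")
```

Under both encodings the fraction decays exponentially in the number of attributes, but the base of the exponent, and hence the effective supply of useful rules the genetic search can sample, depends directly on the encoding chosen.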
