How Good is Fitness Sharing with a Scaling Function?

Fitness sharing has been used widely in genetic algorithms for multi-objective function optimization and machine learning. It is often implemented with a scaling function, which adjusts an individual's raw fitness to improve the performance of the genetic algorithm. However, choosing a scaling function is an ad hoc affair that lacks sufficient theoretical foundation, and it often gives results that need further processing with a hill-climbing algorithm. Although this is already known, an explanation of why it is so has been lacking. This paper explains why fitness sharing with a scaling function performs in this way. We investigate fitness sharing's performance at multi-objective optimization, demonstrate the need for a scaling function of some kind, and discuss what form of scaling function works best. An artificial search space was created for our study. We provide both theoretical and empirical evidence that fitness sharing with a scaling function suffers a dilemma which can easily be mistaken for deception. Our theoretical analyses and empirical studies explain why a larger-than-necessary population is needed for fitness sharing with a scaling function to work, and give an explanation for common fixes such as further processing with a hill-climbing algorithm. Our explanation predicts that annealing the scaling power during a run will improve results, and we verify that it does.

University College, The University of New South Wales
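To make the technique concrete: in standard fitness sharing, an individual's raw fitness is divided by a niche count, the sum of a sharing kernel over its distances to all population members; a scaling function of the kind discussed here typically raises raw fitness to a power beta before sharing. The sketch below is illustrative only and assumes a common triangular sharing kernel with niche radius `sigma_share` and a power-law scaling function; the function names, parameters, and kernel form are not taken from the paper.

```python
def sharing_kernel(distance, sigma_share, alpha=1.0):
    """Triangular sharing kernel: an individual 'shares' with
    neighbours closer than the niche radius sigma_share."""
    if distance >= sigma_share:
        return 0.0
    return 1.0 - (distance / sigma_share) ** alpha

def shared_fitness(raw_fitness, population, index, sigma_share, beta, distance):
    """Shared fitness under a power-law scaling function:
    raw fitness is raised to the scaling power beta, then
    divided by the niche count (sum of kernel values over
    the whole population, including the individual itself)."""
    niche_count = sum(
        sharing_kernel(distance(population[index], other), sigma_share)
        for other in population
    )
    return raw_fitness ** beta / niche_count
```

Annealing the scaling power, as the paper proposes, would correspond to increasing `beta` over the course of a run, so that sharing is gentle early on and selection pressure toward niche peaks grows later.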