Multi-Problem Parameter Tuning using BONESA

We introduce a parameter tuning method designed to tune an evolutionary algorithm (EA) on multiple problems in one go. Our method has two special features. First, it is based on a 'double loop' scheme, consisting of an intertwined searching loop and learning loop: the searching loop seeks EA parameter vectors with high utility, while the learning loop builds a model that predicts the utility of parameter vectors. Using the model instead of performing expensive EA runs reduces the computational effort. Second, our method uses Gaussian filtering and significance testing to establish Pareto dominance in the presence of noisy data. We demonstrate and validate this approach with experiments on an artificial utility landscape. The results show that our method estimates the utility values of parameter vectors very well and collects much useful information on EA robustness.

1 Background and objectives

In recent years, there has been a growing interest in automated parameter tuning methods in Evolutionary Computing [8]. New tuning algorithms, such as SPO [2–4], REVAC [18, 21], Meta-ES [26], F-RACE [5], and ParamILS [14], have been developed and their success has been demonstrated in several case studies. These methods can focus the search on the most promising areas of the parameter space and can find good parameter values for the evolutionary algorithm (EA) to be tuned.

In this paper we argue and demonstrate that tuning algorithms can do more than find good parameter values. This stance is based on the observation that tuning algorithms are in essence search algorithms that generate a large amount of data while traversing the space of parameter vectors. This data contains information about all visited parameter vectors and their utility values, that is, the performance of the EA using the given parameter vector. Often this data is lost, because one is only interested in some good parameter values. However, it can be stored and utilized for more general purposes. To this end, we adopt the view of Hooker [12] distinguishing competitive and scientific testing, and observe that parameter tuning can be used for
– configuring an evolutionary algorithm by choosing parameter values that optimize its performance, and
– analyzing an evolutionary algorithm by studying how its performance depends on its parameter values.
Furthermore, let us also note that the behavior of an EA depends on three factors: its parameter values, the problem instance it is solving, and random effects. Thus, the above list can be extended by:
– analyzing an evolutionary algorithm by studying how its performance depends on the problems (problem instances) it is solving, and
– analyzing an evolutionary algorithm by studying how its performance varies when executing multiple independent repetitions of its run.
It is easy to see that a detailed understanding of algorithm behavior has great practical relevance. Knowing the effects of problem characteristics and algorithm characteristics on algorithm behavior, users can make well-informed design decisions regarding the (evolutionary) algorithm they want to use.

Fig. 1. Illustration of the grand performance landscape showing the performance (z) of EA instances belonging to a given parameter vector (x) on a given problem instance (y). Note: the "cloud" of repeated runs is not shown.
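Concretely, the raw material behind such analyses is nothing more than a table of run records, one per EA run. The sketch below (Python, with illustrative field names that are not taken from any particular tuner) shows how such tuning data could be stored and then queried in two directions: performance as a function of the parameter vector for a fixed problem, and performance as a function of the problem for a fixed parameter vector.

```python
from collections import defaultdict
from statistics import mean

# A minimal, hypothetical record format for tuning data: one entry per EA run.
# The field names (params, problem, repeat, performance) are illustrative only.
runs = [
    {"params": (0.8, 0.1), "problem": "sphere",    "repeat": 0, "performance": 0.92},
    {"params": (0.8, 0.1), "problem": "sphere",    "repeat": 1, "performance": 0.88},
    {"params": (0.8, 0.1), "problem": "rastrigin", "repeat": 0, "performance": 0.55},
    {"params": (0.2, 0.7), "problem": "sphere",    "repeat": 0, "performance": 0.61},
    # ... one record per (parameter vector, problem, repetition)
]

def problem_slice(runs, problem):
    """Mean performance per parameter vector on one fixed problem."""
    groups = defaultdict(list)
    for r in runs:
        if r["problem"] == problem:
            groups[r["params"]].append(r["performance"])
    return {p: mean(v) for p, v in groups.items()}

def parameter_slice(runs, params):
    """Mean performance per problem for one fixed parameter vector."""
    groups = defaultdict(list)
    for r in runs:
        if r["params"] == params:
            groups[r["problem"]].append(r["performance"])
    return {q: mean(v) for q, v in groups.items()}

print(problem_slice(runs, "sphere"))      # how performance depends on parameters
print(parameter_slice(runs, (0.8, 0.1)))  # how performance depends on the problem
```

These two queries correspond directly to the two ways of slicing the landscape discussed next.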
Figure 1 shows the kind of information that can be gathered if tuning data is kept and integrated into what we call a grand performance landscape. For the sake of this illustration, we restrict ourselves to a single EA parameter. Thus, we obtain a 3D landscape with axis x representing the values of the parameter and axis y representing the different problems investigated.1 (In the general case of n parameters, we have n + 1 axes here.) The third dimension z shows the performance of the EA instance belonging to a given parameter vector on a given problem instance. It should be noted that for stochastic algorithms, such as EAs, this landscape is blurry if the repetitions with different random seeds are also taken into account. That is, rather than one z-value for a pair 〈x, y〉, we have one z-value for every run; for repeated runs we get a "cloud".

1 We specifically refer to them as being different problems, since we do not want to imply that these belong to the same type of problem, e.g., multiple TSP or 3SAT instances, that can be solved with similar parameter values. In general, to get the best insights into the effect of parameter values, the test suite should contain a diverse set of problems.

Fig. 2. Illustration of parameter-wise slices (left) and problem-wise slices (right) of the grand utility landscape shown in Figure 1. (The "cloud" of repeated runs is not shown.)

The left-hand side of Figure 2 shows 2D slices of the grand performance landscape corresponding to specific parameter vectors. This provides information on robustness to changes in problem specification. Such data are often reported in the EC literature, albeit with a different presentation. A frequently used option is to show a table containing the experimental outcomes (performance results) of one or more EA instances on a predefined test suite, e.g., the five De Jong functions or the 25 functions of the CEC 2005 contest.

The right-hand side of Figure 2 shows 2D slices corresponding to specific problem instances. Such a slice shows how the performance of the given EA depends on the parameter vectors it uses. This discloses information regarding robustness to changes in parameter values. Such data are hardly ever published in evolutionary computing, mainly because they are not even generated: parameter values are mostly selected by conventions, ad hoc choices, and very limited experimental comparisons. One of the main messages of this paper is that with the increased adoption of tuning algorithms this practice could change, and knowledge about EA parameterization could be collected and disseminated. It changes the question from 'which parameter values should I choose?' to 'when should I choose which parameter values?'.

However, the parameter tuning problem is characterized by two specific issues that make realizing this vision of generating knowledge about EA behavior on a large scale very difficult:
– The cost of testing a single parameter vector p is high. To establish its performance, the given EA must be executed using the values in p, which can be very time consuming.
– The presence of noise. Noise is caused by the stochastic nature of EAs, implying that different runs with p can (and will) deliver different results. This mandates that a parameter vector be tested repeatedly, thus amplifying the cost issue, possibly by orders of magnitude (depending on the number of required repetitions).
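The noise issue means that a single run tells us little: deciding whether one parameter vector truly outperforms another requires a statistical comparison of repeated, noisy measurements. The sketch below illustrates this with a Welch-style t-test (in the spirit of the Welch reference listed below) via SciPy; the helper name, significance level, and "higher is better" assumption are our own illustrative choices, not the exact dominance test used by BONESA.

```python
from scipy import stats  # Welch's t-test for samples with unequal variances

def significantly_better(perf_a, perf_b, alpha=0.05):
    """
    Decide whether parameter vector A performs significantly better than B
    on the same problem, given repeated noisy performance measurements.
    Illustrative helper (name and alpha are assumptions); higher is better.
    """
    res = stats.ttest_ind(perf_a, perf_b, equal_var=False)
    return res.pvalue < alpha and res.statistic > 0

# Example: five repeated runs of the EA with two parameter vectors on one problem.
runs_a = [0.91, 0.88, 0.93, 0.90, 0.89]
runs_b = [0.84, 0.86, 0.82, 0.85, 0.83]
print(significantly_better(runs_a, runs_b))  # True if A's advantage is significant
```

In the multi-problem setting, such per-problem comparisons are the natural building block for a Pareto dominance check across problems: A dominates B only if it is at least as good on every problem and significantly better on at least one.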
The main contributions of this paper can be listed as follows:
– We introduce a multi-problem parameter tuning approach, called BONESA, that can deal with the specific issues of parameter tuning.
– We validate this method by experiments on an artificial utility landscape.
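To make the double-loop idea concrete, the sketch below shows one possible shape of such a tuner: a learning loop fits a simple Gaussian-kernel (Nadaraya–Watson) estimate of utility to all measurements gathered so far, and a searching loop generates candidate parameter vectors, pre-screens them on this cheap model, and spends expensive EA runs only on the most promising ones. This is a minimal illustration under our own assumptions, not the BONESA implementation; all names (run_ea, budget, the kernel width h, etc.) are hypothetical.

```python
import math
import random

def run_ea(params):
    """Placeholder for an expensive EA run; returns a noisy utility (assumed)."""
    x, y = params
    true_utility = math.exp(-((x - 0.3) ** 2 + (y - 0.7) ** 2) / 0.1)
    return true_utility + random.gauss(0.0, 0.05)

def predict(archive, params, h=0.1):
    """Gaussian-kernel (Nadaraya-Watson) utility estimate at `params`,
    built from all previously evaluated (parameter vector, utility) pairs."""
    weights, values = [], []
    for p, u in archive:
        d2 = sum((a - b) ** 2 for a, b in zip(p, params))
        weights.append(math.exp(-d2 / (2 * h * h)))
        values.append(u)
    s = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / s if s > 0 else 0.0

def tune(budget=50, candidates_per_iter=20, evals_per_iter=2):
    archive = []  # (parameter vector, measured utility): the learning loop's data
    # Initialise with a few randomly chosen, actually evaluated vectors.
    for _ in range(5):
        p = (random.random(), random.random())
        archive.append((p, run_ea(p)))
        budget -= 1
    while budget > 0:
        # Searching loop: propose candidates near the best-so-far and at random.
        best = max(archive, key=lambda e: e[1])[0]
        cands = [tuple(min(1.0, max(0.0, b + random.gauss(0, 0.1))) for b in best)
                 for _ in range(candidates_per_iter // 2)]
        cands += [(random.random(), random.random())
                  for _ in range(candidates_per_iter // 2)]
        # Learning loop: rank candidates by predicted utility (no EA runs needed).
        cands.sort(key=lambda p: predict(archive, p), reverse=True)
        # Spend real EA runs only on the top-ranked candidates.
        for p in cands[:evals_per_iter]:
            if budget <= 0:
                break
            archive.append((p, run_ea(p)))
            budget -= 1
    return max(archive, key=lambda e: e[1])

print(tune())  # best parameter vector found and its measured utility
```

A real multi-problem tuner would keep one such utility estimate per problem and use a noise-aware dominance test (as sketched earlier) to compare candidates; a single artificial utility suffices here for illustration.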

References

[1] B. L. Welch. The generalization of 'Student's' problem when several different population variances are involved, 1947.

[2] R. Shepard, et al. Toward a universal law of generalization for psychological science, 1987, Science.

[3] N. Schaumberger. Generalization, 1989, Whitehead and Philosophy of Education.

[4] John N. Hooker, et al. Testing heuristics: We have it all wrong, 1995, J. Heuristics.

[5] Andy J. Keane, et al. Metamodeling Techniques for Evolutionary Optimization of Computationally Expensive Problems: Promises and Limitations, 1999, GECCO.

[6] George C. Runger, et al. Using Experimental Design to Find Effective Parameter Settings for Heuristics, 2001, J. Heuristics.

[7] Marco Laumanns, et al. SPEA2: Improving the strength Pareto evolutionary algorithm, 2001.

[8] Evan J. Hughes, et al. Evolutionary Multi-objective Ranking with Uncertainty and Noise, 2001, EMO.

[9] Thomas Bartz-Beielstein, et al. Analysis of Particle Swarm Optimization Using Computational Statistics, 2004.

[10] Thomas Bartz-Beielstein, et al. Tuning search algorithms for real-world applications: a regression tree based approach, 2004, Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No.04TH8753).

[11] Andrew W. Moore, et al. The Racing Algorithm: Model Selection for Lazy Learners, 1997, Artificial Intelligence Review.

[12] Thomas Bartz-Beielstein, et al. Sequential parameter optimization, 2005, 2005 IEEE Congress on Evolutionary Computation.

[13] Yaochu Jin, et al. A comprehensive survey of fitness approximation in evolutionary computation, 2005, Soft Comput.

[14] Kalyanmoy Deb, et al. Searching for Robust Pareto-Optimal Solutions in Multi-objective Optimization, 2005, EMO.

[15] Jonathan E. Fieldsend, et al. Multi-objective optimisation in the presence of uncertainty, 2005, 2005 IEEE Congress on Evolutionary Computation.

[16] A. E. Eiben, et al. Efficient relevance estimation and value calibration of evolutionary algorithm parameters, 2007, 2007 IEEE Congress on Evolutionary Computation.

[17] Marcus Gallagher, et al. Combining Meta-EAs and Racing for Difficult EA Parameter Tuning Tasks, 2007, Parameter Setting in Evolutionary Algorithms.

[18] Thomas Stützle, et al. Automatic Algorithm Configuration Based on Local Search, 2007, AAAI.

[19] F. Hutter, et al. ParamILS: an automatic algorithm configuration framework, 2009.

[20] Hamidreza Eskandari, et al. Evolutionary multiobjective optimization in noisy problem environments, 2009, J. Heuristics.

[21] A. E. Eiben, et al. Comparing parameter tuning methods for evolutionary algorithms, 2009, 2009 IEEE Congress on Evolutionary Computation.

[22] Heike Trautmann, et al. New Uncertainty Handling Strategies in Multi-objective Evolutionary Optimization, 2010, PPSN.

[23] A. E. Eiben, et al. Parameter Tuning of Evolutionary Algorithms: Generalist vs. Specialist, 2010, EvoApplications.

[24] A. E. Eiben, et al. Parameter tuning for configuring and analyzing evolutionary algorithms, 2011, Swarm Evol. Comput.

[25] Jacek M. Zurada, et al. Swarm and Evolutionary Computation, 2012, Lecture Notes in Computer Science.