Benchmarking Numerical Multiobjective Optimizers Revisited

Algorithm benchmarking plays a vital role in designing new optimization algorithms and in recommending efficient and robust algorithms for practical purposes. So far, two main approaches have been used to compare algorithms in the evolutionary multiobjective optimization (EMO) field: (i) displaying empirical attainment functions and (ii) reporting statistics on quality indicator values. Most of the time, EMO benchmarking studies compare algorithms for fixed and often arbitrary budgets of function evaluations, even though the algorithms are anytime optimizers. Instead, we propose to transfer and adapt standard benchmarking techniques from the single-objective optimization and classical derivative-free optimization communities to the field of EMO. Reporting \emph{target-based runlengths} makes it possible to compare algorithms quantitatively across varying numbers of function evaluations. Displaying data profiles can aggregate performance information over different test functions, problem difficulties, and quality indicators. We apply this approach to compare three common algorithms on a new test function suite derived from the well-known single-objective BBOB functions. The focus thereby lies less on gaining insights into the algorithms than on showcasing the concepts and on what can be gained over current benchmarking approaches.
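The two central tools named above, target-based runlengths and data profiles, can be illustrated with a minimal sketch. The function names, the hypothetical indicator trajectories, and the target value below are illustrative assumptions, not part of any benchmarking platform: a runlength is the number of function evaluations an algorithm needs until a (to-be-minimized) quality indicator, e.g. a hypervolume gap, first reaches a given target, and a data profile reports the fraction of (problem, target) pairs solved within each evaluation budget.

```python
def runlength_to_target(indicator_values, target):
    """Return the 1-based number of function evaluations until the
    quality indicator first reaches `target`, or None if it never does.
    `indicator_values[t]` is the indicator value after t+1 evaluations."""
    for t, value in enumerate(indicator_values):
        if value <= target:
            return t + 1
    return None

def data_profile(runlengths, budgets):
    """Fraction of (problem, target) pairs solved within each budget;
    `None` entries in `runlengths` count as unsolved."""
    total = len(runlengths)
    return [sum(1 for r in runlengths if r is not None and r <= b) / total
            for b in budgets]

# Hypothetical hypervolume-gap trajectories on three problems,
# one value per function evaluation, target gap 0.1:
trajectories = [
    [0.9, 0.5, 0.2, 0.05],   # target reached after 4 evaluations
    [0.8, 0.09, 0.01, 0.0],  # target reached after 2 evaluations
    [0.7, 0.6, 0.5, 0.4],    # target never reached
]
runlengths = [runlength_to_target(t, 0.1) for t in trajectories]
profile = data_profile(runlengths, budgets=[1, 2, 4])
# runlengths -> [4, 2, None]; profile -> [0.0, 1/3, 2/3]
```

Because runlengths are measured at fixed quality targets rather than at a fixed budget, they remain comparable across algorithms stopped after different numbers of evaluations, which is exactly the anytime perspective the abstract argues for.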
