Session details: Black box optimization benchmarking 2012 (BBOB 2012)

Welcome to the Black-Box Optimization Benchmarking (BBOB) 2012 workshop! This workshop is a follow-up to the BBOB 2009 workshop in Montreal and the BBOB 2010 workshop in Portland. The workshop focuses on benchmarking continuous optimizers, a task that is crucial for assessing the performance of optimizers quantitatively, for understanding the weaknesses and strengths of each algorithm, and a compulsory step in evaluating new algorithm designs. Because this task is tedious and difficult to carry out, we provide the participants with: the choice and implementation of two well-motivated single-objective benchmark function testbeds for noiseless and noisy optimization, the design of an experimental set-up, the generation of data output, and the post-processing and presentation of the results in graphs and tables.

Over the years, the BBOB workshop has established a standard for benchmarking continuous optimizers. The code needed to run the experiments and to post-process the data, as well as the data collected during the previous editions of the workshop, is available on the website http://coco.gforge.inria.fr/doku.php?id=start.

We would like to thank all the participants for their contributions and are happy to announce that we have accepted 19 papers. For further news about the workshop, please check our webpage and subscribe to the mailing lists!