Linear Convergence on Positively Homogeneous Functions of a Comparison Based Step-Size Adaptive Randomized Search: the (1+1) ES with Generalized One-fifth Success Rule

In the context of unconstrained numerical optimization, this paper investigates the global linear convergence of a simple probabilistic derivative-free optimization (DFO) algorithm. The algorithm samples a candidate solution from a standard multivariate normal distribution scaled by a step-size and centered at the current solution. The candidate is accepted if it has a better objective function value than the current solution. Crucial to the algorithm is the adaptation of the step-size, which is done so as to maintain a certain probability of success. The algorithm, already proposed in the 1960s, is a generalization of Rechenberg's well-known $(1+1)$ Evolution Strategy (ES) with one-fifth success rule; closely related methods were proposed by Devroye under the name compound random search and by Schumer and Steiglitz under the name adaptive step-size random search. In addition to being derivative-free, the algorithm is function-value-free: it exploits the objective function only through comparisons. It belongs to the class of comparison-based step-size adaptive randomized search (CB-SARS) algorithms. For the convergence analysis, we follow the methodology developed in a companion paper for investigating linear convergence of CB-SARS: exploiting invariance properties of the algorithm, we turn the study of global linear convergence on scaling-invariant functions into the study of the stability of an underlying normalized Markov chain (MC). We then prove global linear convergence by establishing the stability (irreducibility, recurrence, positivity, geometric ergodicity) of the normalized MC associated with the $(1+1)$-ES. More precisely, we prove that, starting from any initial solution and any initial step-size, linear convergence occurs with probability one and in expectation. Our proof holds on unimodal functions that are composites of strictly increasing functions with continuously differentiable, positively homogeneous functions of degree $\alpha$. This function class includes composites of norm functions but also non-quasiconvex functions; because of the composition with a strictly increasing function, it also includes non-continuous functions. We find that a sufficient condition for global linear convergence is that the step-size increases on linear functions, a condition typically satisfied for standard parameter choices. Although the algorithm was introduced more than 40 years ago, we provide here the first proof of global linear convergence for the $(1+1)$-ES with generalized one-fifth success rule, and the first proof of linear convergence for a CB-SARS on a class of functions that includes non-quasiconvex and non-continuous functions. Our proof also holds on functions where linear convergence of some CB-SARS was previously proven, namely convex quadratic functions (including the well-known sphere function).
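To make the procedure concrete, here is a minimal Python sketch of the $(1+1)$-ES with a generalized one-fifth success rule, assuming the standard multiplicative parameterization of the step-size update: increase by a factor $\gamma > 1$ on success, decrease by $\gamma^{-p/(1-p)}$ on failure, so that the step-size is stationary exactly when the success rate equals the target probability $p$ (the classical one-fifth rule corresponds to $p = 1/5$). The parameter names, default values, and the ellipsoid test function are illustrative choices, not the paper's notation.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0, p_target=0.2, gamma=1.5,
                    max_iters=2000, rng=None):
    """(1+1)-ES with generalized one-fifth success rule (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    sigma = float(sigma0)
    fx = f(x)
    for _ in range(max_iters):
        # Candidate: current solution plus step-size times a standard
        # multivariate normal sample (isotropic Gaussian mutation).
        y = x + sigma * rng.standard_normal(x.shape)
        fy = f(y)
        if fy < fx:
            # Success: accept the candidate and increase the step-size.
            x, fx = y, fy
            sigma *= gamma
        else:
            # Failure: keep the current solution and decrease the step-size.
            # The exponent makes sigma stationary at success rate p_target.
            sigma *= gamma ** (-p_target / (1.0 - p_target))
    return x, fx, sigma

# Example on a positively homogeneous function of degree 2 (an
# ill-conditioned convex quadratic). Composing f with any strictly
# increasing g: R -> R leaves the run unchanged, since the algorithm
# uses f only through comparisons.
H = np.diag([1.0, 10.0])
f = lambda x: x @ H @ x
x_end, f_end, _ = one_plus_one_es(f, x0=np.ones(2), sigma0=1.0)
print(f_end)  # typically many orders of magnitude below f(x0) = 11
```

With these defaults the step-size increases in expectation on linear functions, where the success probability of isotropic sampling is $1/2 > p$; this is precisely the sufficient condition for global linear convergence stated above. In the companion-paper methodology, the stability analysis then applies to a normalized chain, roughly $Z_n = X_n/\sigma_n$ when the optimum is at zero.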

[1] Anne Auger et al., On Proving Linear Convergence of Comparison-based Step-size Adaptive Randomized Search on Scaling-Invariant Functions via Stability of Markov Chains, 2013, arXiv.

[2] Nikolaus Hansen et al., An Analysis of Mutative σ-Self-Adaptation on Linear Fitness Functions, 2006, Evolutionary Computation.

[3] John A. Nelder et al., A Simplex Method for Function Minimization, 1965, Comput. J.

[4] Nikolaus Hansen et al., Completely Derandomized Self-Adaptation in Evolution Strategies, 2001, Evolutionary Computation.

[5] M. Powell, Developments of NEWUOA for unconstrained minimization without derivatives, 2007.

[6] M. A. Schumer and K. Steiglitz, Adaptive step size random search, 1968.

[7] Jens Jägersküpper et al., Algorithmic analysis of a basic evolutionary algorithm for continuous optimization, 2007, Theor. Comput. Sci.

[8] Ingo Rechenberg, Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, 1973.

[9] Anne Auger et al., Empirical comparisons of several derivative free optimization algorithms, 2009.

[10] Sean P. Meyn and Richard L. Tweedie, Markov Chains and Stochastic Stability, 1993, Communications and Control Engineering Series.

[11] Anne Auger, Convergence results for the (1,λ)-SA-ES using the theory of φ-irreducible Markov chains, 2005, Theor. Comput. Sci.

[12] Jens Jägersküpper et al., Rigorous Runtime Analysis of the (1+1) ES: 1/5-Rule and Ellipsoidal Fitness Landscapes, 2005, FOGA.

[13] J. Spall, Multivariate stochastic approximation using a simultaneous perturbation gradient approximation, 1992.

[14] Jens Jägersküpper et al., How the (1+1) ES using isotropic mutations minimizes positive definite quadratic forms, 2006, Theor. Comput. Sci.