A Monte Carlo optimization technique called “simulated annealing” is a descent algorithm modified by random ascent moves in order to escape local minima which are not global minima. The level of randomization is determined by a control parameter T, called temperature, which tends to zero according to a deterministic “cooling schedule.” We give a simple necessary and sufficient condition on the cooling schedule for the algorithm state to converge in probability to the set of globally minimum cost states. In the special case that the cooling schedule has parametric form T_t = c/log(1 + t), the condition for convergence is that c be greater than or equal to the depth, suitably defined, of the deepest local minimum which is not a global minimum state.
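The procedure described above can be sketched as follows. This is a minimal illustration, not the paper's own code: the toy cost function, neighborhood structure, and parameter values are all assumptions made for the example. It uses the logarithmic cooling schedule T_t = c/log(1 + t) from the abstract, and the toy landscape has a non-global local minimum of depth 4, so by the stated condition one should take c ≥ 4 for convergence.

```python
import math
import random

def simulated_annealing(cost, neighbors, x0, c=1.0, steps=10000, seed=0):
    """Minimize `cost` over a discrete state space by simulated annealing
    with the logarithmic cooling schedule T_t = c / log(1 + t)."""
    rng = random.Random(seed)
    x = x0
    best = x
    for t in range(1, steps + 1):
        T = c / math.log(1 + t)          # temperature tends to zero as t grows
        y = rng.choice(neighbors(x))     # propose a random neighboring state
        delta = cost(y) - cost(x)
        # Descent moves are always accepted; ascent moves are accepted with
        # probability exp(-delta / T), which lets the chain escape local minima.
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            x = y
        if cost(x) < cost(best):
            best = x
    return best

# Hypothetical toy problem: states 0..4 on a line, with a local (but not
# global) minimum at state 2 of depth 4 (the barrier to state 3 has height 4).
def cost(x):
    return {0: 3, 1: 4, 2: 1, 3: 5, 4: 0}[x]

def neighbors(x):
    return [max(0, x - 1), min(4, x + 1)]
```

Starting from the local minimum, e.g. `simulated_annealing(cost, neighbors, x0=2, c=5.0)`, a choice of c above the depth 4 makes the uphill acceptance probability decay slowly enough (roughly (1 + t)^(-4/c)) that the chain can still climb out of the trap at late times, whereas c well below the depth typically leaves it stuck at state 2.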