Stopping times and confidence bounds for small-sample stochastic approximation algorithms

The practical application of stochastic approximation methods requires a reliable means of stopping the iterative process when the estimate is close to the optimal value, or when further improvement of the estimate is doubtful. Conventional approaches to stopping stochastic algorithms employ probabilistic criteria based on the asymptotic distribution of the stochastic approximation process, often with the parameters of that distribution estimated sequentially. Difficulties arise when this approach is applied to small (finite) samples or when the conditions for asymptotic convergence do not hold. We propose a different approach that uses the notion of a surrogate process as a proxy for the stochastic approximation process. A surrogate process is a parameterized stochastic process whose distribution is close to that of the original stochastic approximation process, but whose probabilities are easier to calculate. We show that, under certain conditions, the error in the probability that the estimate is close to the optimal solution is small when that probability is computed using the surrogate proxy distribution. We discuss this approach to stopping stochastic approximation first in the context of a simple example and then in the context of a more realistic simulation optimization, including some empirical results. Surrogate processes contribute to the theory and practice of stopping stochastic approximation in two ways: they provide a systematic approach to obtaining a proxy for the distribution of the estimate (under the appropriate conditions), and they provide a means to stop a process, or to compute statistics for the estimate, without relying on the asymptotic behavior of the process, covering cases such as finite samples, non-converging processes, and constant-step-size processes.
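To make the stopping idea concrete, the sketch below runs a simple Robbins-Monro iteration and stops when a surrogate distribution for the current estimate assigns high probability to the estimate lying within a tolerance of its surrogate center. This is only an illustrative sketch, not the construction used in the paper: the Gaussian surrogate, the window-based variance estimate, and the tolerances (delta, alpha, window) are assumptions introduced here for demonstration.

```python
# Illustrative sketch (assumptions, not the paper's construction): a
# Robbins-Monro iteration stopped when a Gaussian surrogate for the
# distribution of the current estimate assigns probability >= 1 - alpha
# to the estimate being within +/- delta of the surrogate center.
import numpy as np
from scipy.stats import norm


def noisy_gradient(theta, rng):
    """Noisy measurement of the gradient of f(theta) = (theta - 2)^2 / 2."""
    return (theta - 2.0) + rng.normal(scale=0.5)


def run_sa_with_surrogate_stop(theta0=0.0, delta=0.05, alpha=0.05,
                               window=50, max_iter=20_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = theta0
    history = []
    for k in range(1, max_iter + 1):
        a_k = 1.0 / k                      # standard decreasing step size
        theta -= a_k * noisy_gradient(theta, rng)
        history.append(theta)
        if k >= window:
            recent = np.array(history[-window:])
            sigma = recent.std(ddof=1) + 1e-12
            # Surrogate probability that the estimate lies within +/- delta
            # of the surrogate center; stop once it exceeds 1 - alpha.
            p_close = norm.cdf(delta / sigma) - norm.cdf(-delta / sigma)
            if p_close >= 1.0 - alpha:
                return theta, k, p_close
    return theta, max_iter, None


theta_hat, n_iter, p_close = run_sa_with_surrogate_stop()
print(f"stopped after {n_iter} iterations at theta = {theta_hat:.4f}")
```

The same loop structure applies with a constant step size; only the surrogate used to approximate the distribution of the estimate would change, which is precisely the flexibility the abstract attributes to the surrogate-process approach.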