On-line Step Size Adaptation

Category: Algorithms and Architectures
Sub-category: Online Learning Algorithms

Gradient-based methods are widely used for optimization and form the basis of several neural network training algorithms, including backpropagation. They are, however, known to converge slowly. Several techniques exist for accelerating gradient-based optimization, but very few of them are applicable to stochastic (or real-time) optimization. This paper proposes a new step size adaptation technique, designed specifically to accelerate stochastic gradient optimization (and therefore also the real-time training of neural networks). The theoretical basis of the technique is discussed, and an experimental evaluation of its performance is reported.
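The abstract does not state the paper's update rule, so the following is only a minimal sketch of the general setting it describes: stochastic gradient descent in which each parameter carries its own on-line adapted step size. The sketch assumes a simple multiplicative sign-agreement rule (in the spirit of delta-bar-delta); the function and parameter names (adaptive_sgd, up, down, eta_min, eta_max) are illustrative, and the paper's actual technique may differ.

import numpy as np

def adaptive_sgd(grad_fn, w0, n_steps=1000, eta0=0.01, up=1.2, down=0.5,
                 eta_min=1e-8, eta_max=1.0, seed=0):
    """SGD with per-parameter on-line step size adaptation (illustrative rule).

    A parameter's step size grows when successive stochastic gradients agree
    in sign (the step is likely too small) and shrinks when they disagree
    (the iterate is likely oscillating around a minimum).
    """
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    eta = np.full_like(w, eta0)      # one step size per parameter
    g_prev = np.zeros_like(w)
    for _ in range(n_steps):
        g = grad_fn(w, rng)          # noisy (stochastic) gradient sample
        agree = g * g_prev           # >0: same sign, <0: sign flip
        eta = np.where(agree > 0, eta * up,
                       np.where(agree < 0, eta * down, eta))
        eta = np.clip(eta, eta_min, eta_max)
        w -= eta * g
        g_prev = g
    return w

# Toy problem: minimize a noisy quadratic with E[f(w)] = 0.5 * ||w - t||^2.
t = np.array([3.0, -2.0])
noisy_grad = lambda w, rng: (w - t) + 0.1 * rng.standard_normal(w.shape)
print(adaptive_sgd(noisy_grad, w0=[0.0, 0.0]))  # converges toward [3, -2]

Because the adaptation uses only the current and previous gradient samples, it runs fully on-line, which is what makes schemes of this kind applicable to stochastic and real-time training where batch-based acceleration methods are not.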