Efficient estimation of dynamically optimal learning rate and momentum for backpropagation learning

This paper considers efficient estimation of the dynamically optimal learning rate (LR) and momentum factor (MF) for backpropagation learning in a multilayer feedforward neural net. A novel approach exploiting the derivatives with respect to the LR and MF is presented, which does not need to explicitly compute the first- and second-order derivatives in weight space, but rather makes use of the information gathered during the forward and backward procedures. Since the computational and storage burden of estimating the optimal LR and MF at most triples that of the standard backpropagation algorithm (BPA), the backpropagation learning procedure can therefore be accelerated with remarkable savings in running time. Computer simulations provided in this paper indicate that at least an order of magnitude of savings in running time can be achieved using the present approach.
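To illustrate the idea of choosing the step size from derivatives with respect to the LR rather than from an explicit Hessian, the following is a minimal sketch (not the paper's exact algorithm): along the gradient-plus-momentum direction d, the first and second directional derivatives of the error E are obtained from one extra gradient evaluation, and a Newton step on the scalar LR gives the dynamically optimal value. The quadratic loss, matrix `A`, vector `b`, and momentum factor `mu` are all illustrative assumptions.

```python
import numpy as np

# Hypothetical quadratic error E(w) = 0.5 w^T A w - b^T w (illustrative only)
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])

def grad(w):
    return A @ w - b

def optimal_lr(w, d, eps=1e-5):
    # Directional derivatives of E along d, from gradient evaluations only
    # (no explicit second-order derivatives in weight space):
    g = grad(w)
    phi1 = g @ d                                 # dE/d(eta) at eta = 0
    phi2 = ((grad(w + eps * d) - g) / eps) @ d   # d^2E/d(eta)^2, finite diff
    return -phi1 / phi2                          # Newton step on the scalar eta

w = np.zeros(2)
step = np.zeros(2)
mu = 0.9                         # illustrative fixed momentum factor
for _ in range(20):
    d = -grad(w) + mu * step     # gradient-plus-momentum search direction
    eta = optimal_lr(w, d)       # dynamically estimated learning rate
    step = eta * d
    w = w + step
```

Because the step size exactly minimizes the quadratic error along each search direction, the error is nonincreasing at every iteration; the extra cost per iteration is a single additional gradient evaluation, in the spirit of the bounded overhead the abstract describes.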