Efficient estimation of dynamically optimal learning rate and momentum for backpropagation learning
[1] Shixin Cheng, et al. Dynamic learning rate optimization of the backpropagation algorithm, 1995, IEEE Trans. Neural Networks.
[2] Yann LeCun, et al. Improving the convergence of back-propagation learning with second-order methods, 1989.
[3] Mohamed Mohandes, et al. Two adaptive stepsize rules for gradient descent and their application to the training of feedforward artificial neural networks, 1994, Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94).
[4] Simon Haykin. Neural Networks: A Comprehensive Foundation, 1994.
[5] Roberto Battiti. First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method, 1992, Neural Computation.
[6] Robert A. Jacobs. Increased rates of convergence through learning rate adaptation, 1987, Neural Networks.