Training Feedforward Networks with the Marquardt Algorithm
[1] D. Marquardt. An Algorithm for Least-Squares Estimation of Nonlinear Parameters, 1963, Journal of the Society for Industrial and Applied Mathematics.
[2] David E. Rumelhart, et al. Learning representations by back-propagating errors, 1986, Nature.
[3] Robert A. Jacobs, et al. Increased rates of convergence through learning rate adaptation, 1988, Neural Networks.
[4] Sharad Singhal, et al. Training Multilayer Perceptrons with the Extended Kalman Algorithm, 1988, NIPS.
[5] Dong C. Liu, et al. On the limited memory BFGS method for large scale optimization, 1989, Math. Program.
[6] Stefanos Kollias, et al. An adaptive least squares algorithm for the efficient training of artificial neural networks, 1989.
[7] C. Charalambous, et al. Conjugate gradient algorithm for efficient training of artificial neural networks, 1990.
[8] Tom Tollenaere, et al. SuperSAB: Fast adaptive back propagation with good scaling properties, 1990, Neural Networks.
[9] David F. Shanno, et al. Recent advances in numerical techniques for large scale optimization, 1990.
[10] Derrick Nguyen, et al. Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights, 1990, IJCNN International Joint Conference on Neural Networks.
[11] Gintaras V. Puskorius, et al. Decoupled extended Kalman filter training of feedforward layered networks, 1991, IJCNN-91-Seattle International Joint Conference on Neural Networks.
[12] A. K. Rigler, et al. Rescaling of variables in back propagation learning, 1991, Neural Networks.
[13] Etienne Barnard, et al. Optimization for training neural nets, 1992, IEEE Trans. Neural Networks.
[14] Roberto Battiti, et al. First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method, 1992, Neural Computation.