LEARNING ALGORITHMS FOR CONNECTIONIST NETWORKS: APPLIED GRADIENT METHODS OF NONLINEAR OPTIMIZATION

The problem of learning in connectionist networks, in which network connection strengths are modified systematically so that the response of the network increasingly approximates the desired response, can be structured as an optimization problem. The widely used back-propagation method of connectionist learning [19, 21, 18] is set in the context of nonlinear optimization, and within this framework the issues of stability, convergence, and parallelism are considered. As a form of gradient descent with a fixed step size, back-propagation is known to be unstable; this is illustrated using Rosenbrock's function and contrasted with stable methods that perform a line search in the gradient direction. The convergence criterion for connectionist problems involving binary functions is discussed relative to the behavior of gradient descent in the vicinity of local minima, and a minimax criterion is compared with the least-squares criterion. The contribution of the momentum term [19, 18] to more rapid convergence is interpreted in terms of the geometry of the weight space. It is shown that in plateau regions of relatively constant gradient, the momentum term increases the effective step size by a factor of 1/(1-μ), where μ is the momentum constant; in valley regions with steep sides, it focuses the search direction toward the local minimum by averaging out oscillations in the gradient.

Comments: University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-88-62. Available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/597
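The instability of fixed-step gradient descent can be reproduced on Rosenbrock's function. The sketch below is illustrative (the function names, starting point, and step sizes are assumptions, not the report's code): a fixed step size overshoots the curved valley and the objective grows, while a simple backtracking line search along the negative gradient decreases it at every step.

```python
def rosenbrock(x, y):
    """Rosenbrock's banana-shaped test function; minimum f(1, 1) = 0."""
    d = y - x * x
    return (1 - x) * (1 - x) + 100 * d * d

def rosenbrock_grad(x, y):
    """Analytic gradient of Rosenbrock's function."""
    d = y - x * x
    return (-2 * (1 - x) - 400 * x * d, 200 * d)

def fixed_step_descent(x, y, lr=0.01, steps=3):
    """Gradient descent with a fixed step size, as in plain back propagation."""
    for _ in range(steps):
        gx, gy = rosenbrock_grad(x, y)
        x, y = x - lr * gx, y - lr * gy
    return rosenbrock(x, y)

def line_search_descent(x, y, steps=3):
    """Gradient descent with a backtracking line search in the gradient direction."""
    for _ in range(steps):
        gx, gy = rosenbrock_grad(x, y)
        if gx == 0 and gy == 0:
            break  # at a stationary point
        t = 1.0
        while rosenbrock(x - t * gx, y - t * gy) >= rosenbrock(x, y):
            t *= 0.5  # shrink the step until f actually decreases
        x, y = x - t * gx, y - t * gy
    return rosenbrock(x, y)

f0 = rosenbrock(-1.2, 1.0)              # classic starting point, f = 24.2
f_fixed = fixed_step_descent(-1.2, 1.0)   # grows: the iterates overshoot
f_search = line_search_descent(-1.2, 1.0) # shrinks: each step reduces f
```

The backtracking loop is the essential difference: it adapts the step length to the local curvature, which is exactly what a fixed step size cannot do.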
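The 1/(1-μ) amplification on plateaus can be checked numerically. Under a constant gradient g, the momentum update Δw ← -ηg + μΔw is a geometric series whose limit is -ηg/(1-μ), i.e. an effective step size η/(1-μ). A minimal sketch, with η, μ, and g as illustrative values of my own choosing:

```python
def plateau_displacement(g, lr, mu, steps=200):
    """Per-iteration weight change under a constant gradient g,
    iterating the momentum update: delta <- -lr*g + mu*delta."""
    delta = 0.0
    for _ in range(steps):
        delta = -lr * g + mu * delta
    return delta

lr, mu, g = 0.1, 0.9, 1.0
delta = plateau_displacement(g, lr, mu)
# converges to the geometric-series limit -lr*g/(1-mu) = -1.0,
# a step 1/(1-mu) = 10 times larger than the plain step -lr*g = -0.1
```

With μ = 0 the update reduces to plain gradient descent with step -ηg, so the ratio of the two displacements is exactly the 1/(1-μ) factor cited in the abstract.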
References

J. H. Wegstein et al., "Accelerating convergence of iterative processes"
R. Fletcher et al., "A Rapidly Convergent Descent Method for Minimization"
J. A. Nelder et al., "A Simplex Method for Function Minimization"
"On a numerical instability of Davidon-like methods"
J. D. Pearson, "On Variable Metric Methods of Minimization"
R. Fletcher et al., "A New Approach to Variable Metric Algorithms"
C. G. Broyden, "The Convergence of a Class of Double-rank Minimization Algorithms 2. The New Algorithm"
S. Vajda et al., "Numerical Methods for Non-Linear Optimization"
K. S. Narendra et al., "Adaptation and Learning in Automatic Systems"
D. J. Bell et al., "Numerical Methods for Unconstrained Optimization"
T. M. Williams et al., "Practical Methods of Optimization. Vol. 1: Unconstrained Optimization"
J. J. Hopfield et al., "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the United States of America
G. E. Hinton et al., "Learning internal representations by error propagation"
C. W. Anderson et al., "Learning and problem-solving with multilayer connectionist systems (adaptive, strategy learning, neural networks, reinforcement learning)"
G. E. Hinton et al., "Experiments on Learning by Back Propagation"
L. Shastri et al., "Learning Phonetic Features Using Connectionist Networks"
"Practical Methods of Optimization"
D. Zipser et al., "Learning the hidden structure of speech," The Journal of the Acoustical Society of America