Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation

Error backpropagation in feedforward neural network models is a popular learning algorithm with roots in nonlinear estimation and optimization. It is routinely used to calculate error gradients in nonlinear systems with hundreds of thousands of parameters. However, the classical feedforward architecture imposes severe restrictions. This review covers the extension of backpropagation to networks with recurrent connections. It is now possible to efficiently compute error gradients for networks with temporal dynamics, which opens up applications to a host of problems in systems identification and control.
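As a concrete illustration of the idea, the following is a minimal sketch of fixed-point recurrent backpropagation in the Almeida–Pineda style: the network state is relaxed to a fixed point of x = sigma(Wx + b), an adjoint vector is relaxed under the transposed linearized dynamics, and the weight gradient is read off as an outer product. The network size, weight scale, output units, and targets below are illustrative assumptions, not taken from the paper; the gradient is checked against finite differences.

```python
import numpy as np

def sigma(u):
    return np.tanh(u)

def dsigma(u):
    return 1.0 - np.tanh(u) ** 2

rng = np.random.default_rng(0)
n = 5
W = 0.3 * rng.standard_normal((n, n))   # small weights -> contractive dynamics (assumption)
b = 0.1 * rng.standard_normal(n)
out = [0, 1]                            # hypothetical choice of output units
target = np.array([0.5, -0.2])          # hypothetical targets

def fixed_point(W, b, iters=200):
    # Relax x <- sigma(Wx + b) until it settles at a fixed point
    x = np.zeros(n)
    for _ in range(iters):
        x = sigma(W @ x + b)
    return x

def rbp_gradient(W, b):
    x = fixed_point(W, b)
    d = dsigma(W @ x + b)               # sigma'(net) evaluated at the fixed point
    e = np.zeros(n)
    e[out] = x[out] - target            # dE/dx on output units, E = 0.5*sum((x - t)^2)
    v = np.zeros(n)
    for _ in range(200):                # adjoint relaxation: v <- W^T (d * v) + e
        v = W.T @ (d * v) + e
    return np.outer(v * d, x)           # dE/dW[a, c] = v_a * sigma'(net_a) * x_c

grad = rbp_gradient(W, b)

# Sanity check against central finite differences
def loss(W):
    x = fixed_point(W, b)
    return 0.5 * np.sum((x[out] - target) ** 2)

eps = 1e-6
num = np.zeros_like(W)
for a in range(n):
    for c in range(n):
        Wp = W.copy(); Wp[a, c] += eps
        Wm = W.copy(); Wm[a, c] -= eps
        num[a, c] = (loss(Wp) - loss(Wm)) / (2 * eps)

err = np.max(np.abs(grad - num))
```

The key point, compared with backpropagation through time, is that both relaxations run over the same fixed-point state: no unrolled trajectory needs to be stored, only the settled activities and the adjoint vector.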
