The "Moving Targets" Training Algorithm

A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The algorithm resembles back-propagation in that an error function is minimized using a gradient-based method, but the optimization is carried out in the hidden part of state space either instead of, or in addition to, weight space. Computational results are presented for some simple dynamical training problems, one of which requires a response to a signal presented 100 time steps in the past.
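
As a rough illustration of the idea, the sketch below (Python/NumPy) treats the hidden-unit activations at every time step as free "target" variables and descends a sum-of-squares error jointly over the weights and these targets. The network form, error function, learning rate, and all variable names here are assumptions chosen for illustration, not the paper's exact formulation. The point of the construction is that each step's activation is computed from the (free) targets of the previous step rather than from the actual trajectory, so the time steps decouple and no gradient has to be propagated through time.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
n_hidden, n_out, n_steps = 4, 1, 20
n_units = n_hidden + n_out

# Hypothetical setup: a fully recurrent discrete-time net whose state at step t
# is sigmoid(W @ state_{t-1}); the last n_out units are the visible outputs.
W = 0.1 * rng.standard_normal((n_units, n_units))
desired = rng.random((n_steps, n_out))                 # toy desired output trajectory
targets = 0.1 + 0.8 * rng.random((n_steps, n_hidden))  # "moving targets": free hidden activations

lr = 0.1
for epoch in range(2000):
    grad_W = np.zeros_like(W)
    grad_T = np.zeros_like(targets)
    for t in range(1, n_steps):
        prev = np.concatenate([targets[t - 1], desired[t - 1]])  # previous state built from targets
        act = sigmoid(W @ prev)                                  # one-step prediction from targets
        err = act - np.concatenate([targets[t], desired[t]])     # mismatch with targets at step t
        delta = err * act * (1.0 - act)                          # sigmoid derivative factor
        grad_W += np.outer(delta, prev)
        grad_T[t - 1] += W[:, :n_hidden].T @ delta  # error pulls on the previous hidden targets...
        grad_T[t] -= err[:n_hidden]                 # ...and on the current ones
    W -= lr * grad_W           # descend in weight space
    targets -= lr * grad_T     # ...and in the hidden part of state space
```

Because the targets enter the error only through single-step predictions, the minimization can in principle be handed to any batch gradient method; plain gradient descent is used above purely to keep the sketch short.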
