Comparison of four neural net learning methods for dynamic system identification

Four neural network learning rules are compared for dynamic system identification. It is shown that the feedforward network (FFN) pattern learning rule is a first-order approximation of the FFN batch learning rule; consequently, pattern learning remains valid for networks with nonlinear activation functions provided the learning rate is small. For recurrent networks (RecNs), RecN pattern learning differs from RecN batch learning, but the difference can likewise be kept small by using small learning rates. While RecN batch learning is rigorous in a mathematical sense, RecN pattern learning is simpler and can be carried out in real time. Simulation results agree closely with the derived theorems, and the simulations further show that for system identification problems recurrent networks are less sensitive to noise than feedforward networks.
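
To make the pattern-versus-batch distinction concrete, the sketch below contrasts the two update rules on a small feedforward network used as a one-step-ahead identifier of a nonlinear plant. This is a minimal illustration, not the paper's code: the plant, network size, data length, and learning rate are all assumed for the example; the point it demonstrates is that with a small learning rate the two rules give nearly the same result.

```python
# Minimal sketch (assumed setup, not the paper's): pattern vs. batch gradient
# updates for a one-hidden-layer tanh FFN identifying y[k+1] = f(y[k], u[k]).
import copy
import numpy as np

rng = np.random.default_rng(0)

# Generate identification data from a hypothetical nonlinear plant.
N = 200
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = 0.6 * np.sin(y[k]) + 0.3 * u[k]      # assumed plant dynamics
X = np.column_stack([y[:N], u])                      # inputs  (y[k], u[k])
T = y[1:N + 1]                                       # targets y[k+1]

def init_net(n_in=2, n_hid=8):
    return {"W1": 0.1 * rng.standard_normal((n_hid, n_in)),
            "b1": np.zeros(n_hid),
            "W2": 0.1 * rng.standard_normal(n_hid),
            "b2": 0.0}

def forward(net, x):
    h = np.tanh(net["W1"] @ x + net["b1"])
    return net["W2"] @ h + net["b2"], h

def grads(net, x, t):
    """Squared-error gradients for a single pattern (x, t)."""
    yhat, h = forward(net, x)
    e = yhat - t
    dh = e * net["W2"] * (1.0 - h ** 2)              # backprop through tanh
    return {"W1": np.outer(dh, x), "b1": dh, "W2": e * h, "b2": e}

def batch_epoch(net, eta):
    """Batch rule: accumulate gradients over all patterns, then update once."""
    total = {k: np.zeros_like(v) for k, v in net.items()}
    for x, t in zip(X, T):
        g = grads(net, x, t)
        for k in total:
            total[k] += g[k]
    for k in net:
        net[k] = net[k] - eta * total[k]

def pattern_epoch(net, eta):
    """Pattern rule: update immediately after every pattern (real-time style)."""
    for x, t in zip(X, T):
        g = grads(net, x, t)
        for k in net:
            net[k] = net[k] - eta * g[k]

def mse(net):
    return np.mean([(forward(net, x)[0] - t) ** 2 for x, t in zip(X, T)])

eta = 0.01                                           # small learning rate
net_b = init_net()
net_p = copy.deepcopy(net_b)                         # identical initial weights
for _ in range(50):
    batch_epoch(net_b, eta)
    pattern_epoch(net_p, eta)
print("batch MSE  :", mse(net_b))
print("pattern MSE:", mse(net_p))                    # nearly equal for small eta
```

Running the sketch with a small learning rate shows the two trained networks reaching nearly identical error, consistent with the first-order approximation argument; increasing the learning rate makes the two rules diverge.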
