Recurrent back-propagation and Newton algorithms for training recurrent neural networks

This paper discusses recurrent back-propagation and Newton algorithms for an important class of recurrent networks, together with their convergence properties. To ensure proper convergence behavior, the recurrent connections must be suitably constrained during learning. Simulation results demonstrate that, with the suggested constraint, the algorithms outperform their unconstrained counterparts.
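As a minimal sketch of the ideas involved: recurrent back-propagation (in the Pineda/Almeida sense) relaxes the network state to a fixed point, then computes the weight gradient by relaxing an adjoint system with the same dynamics. One concrete constraint that guarantees convergence of both relaxations, assumed here for illustration only (the paper's actual constraint may differ), is to keep the spectral norm of the recurrent weight matrix below 1, which makes the tanh state update a contraction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.5, size=(n, n))   # recurrent weights
b = rng.normal(size=n)                   # biases
u = rng.normal(size=n)                   # external input
t = rng.normal(size=n)                   # target output

def constrain(W, rho=0.9):
    # Hypothetical constraint: scale W so its spectral norm stays below
    # rho < 1.  Since tanh is 1-Lipschitz, the state update then has a
    # unique, globally attracting fixed point.
    s = np.linalg.norm(W, 2)
    return W * (rho / s) if s > rho else W

def forward(W, iters=300):
    # Relax the network state to its fixed point x = tanh(W x + b + u).
    x = np.zeros(n)
    for _ in range(iters):
        x = np.tanh(W @ x + b + u)
    return x

W = constrain(W)
x = forward(W)
e = x - t                      # output error for L = 0.5 * ||x - t||^2
D = np.diag(1.0 - x**2)        # tanh derivative at the fixed point

# Adjoint relaxation: iterate z <- W^T D z + e, which converges because
# ||W^T D|| <= ||W|| < 1 under the constraint above.
z = np.zeros(n)
for _ in range(300):
    z = W.T @ (D @ z) + e
grad_W = np.outer(D @ z, x)    # dL/dW via the adjoint fixed point

# Finite-difference check on one weight confirms the gradient.
eps = 1e-6
Wp = W.copy()
Wp[0, 1] += eps
num = (0.5 * np.sum((forward(Wp) - t)**2) - 0.5 * np.sum(e**2)) / eps
print(abs(num - grad_W[0, 1]) < 1e-4)
```

The contraction property is what makes both the forward and the adjoint iterations well-defined; without such a constraint the relaxations may oscillate or diverge, which is the failure mode the abstract alludes to.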