This paper presents a systematic analysis of the recurrent backpropagation (RBP) algorithm and introduces a number of new results. We first identify a potential problem: RBP does not necessarily converge to a stable fixed point, so the system may backpropagate incorrect error signals and fail to learn properly. Through experiment and eigenvalue analysis on a small network, we show that this does not occur if the learning rate is chosen sufficiently small. Standard backpropagation, on the other hand, proves more robust to a high learning rate than RBP. We then examine the advantages of RBP over standard backpropagation: RBP builds stable fixed points corresponding to the input patterns, which makes it an appropriate tool for content-addressable memory. Finally, we show that introducing a non-local search technique such as simulated annealing has a dramatic effect on a network's ability to learn patterns. This work was funded by NIH grant NS22407.
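The fixed-point behavior at issue can be illustrated with a minimal sketch of the forward relaxation used in Pineda-style recurrent backpropagation: the network state is iterated until it settles, and RBP's error propagation is only valid if this relaxation reaches a stable fixed point. The function names, the sigmoid choice, and the 3-unit example network below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relax_to_fixed_point(W, I, tol=1e-8, max_steps=10_000):
    """Iterate x <- sigmoid(W x + I) until the state stops changing.

    RBP assumes this relaxation settles to a stable fixed point x*;
    the error signal is then backpropagated through x*. If the
    recurrent weights are too large, the iteration may oscillate or
    diverge, which is the failure mode discussed in the paper.
    """
    x = np.zeros(len(I))
    for _ in range(max_steps):
        x_new = sigmoid(W @ x + I)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("relaxation did not converge; fixed point may be unstable")

# Example: small random recurrent weights (well inside the stable
# regime) settle to a fixed point for this hypothetical 3-unit network.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 3))
I = np.array([0.5, -0.2, 0.1])
x_star = relax_to_fixed_point(W, I)
```

Stability of the fixed point is governed by the eigenvalues of the linearized update at x*, which is why the paper's eigenvalue analysis bears directly on whether RBP's error signals are trustworthy.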