Analysis of Recurrent Backpropagation

This paper attempts a systematic analysis of the recurrent backpropagation (RBP) algorithm, introducing a number of new results. We first show that RBP has a potential problem: it does not necessarily converge to a stable fixed point, so the system could backpropagate incorrect error signals and fail to learn properly. We show by experiment and eigenvalue analysis on a small network that this problem does not arise if the learning rate is chosen to be sufficiently small. On the other hand, standard backpropagation is shown to be more robust to a high learning rate than RBP. Next we examine the advantages of RBP over the standard backpropagation algorithm. RBP is shown to build stable fixed points corresponding to the input patterns, which makes it an appropriate tool for content-addressable memory. Finally, we show that the introduction of a non-local search technique such as simulated annealing has a dramatic effect on a network's ability to learn patterns.

This work was funded by NIH grant NS22407.
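The analysis above centers on the two fixed-point relaxations at the heart of RBP: a forward relaxation to an activation fixed point and a backward relaxation that propagates error signals through that fixed point. As a point of reference, the following is a minimal sketch of one common Almeida/Pineda-style formulation of RBP, not the paper's own implementation; the sigmoid activation, single recurrent weight matrix W, learning rate, tolerance, and function names are all illustrative assumptions. The final eigenvalue check is the kind of local stability test alluded to in the abstract.

```python
# Hypothetical sketch of recurrent backpropagation (Almeida/Pineda-style
# fixed-point formulation); illustrative only, not the paper's code.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbp_step(W, x, target, out_mask, lr=0.01, tol=1e-6, max_iter=500):
    """One RBP weight update for a fully recurrent network y = f(W y + x).

    Updates W in place and returns the activation fixed point together with
    a boolean indicating whether that fixed point is locally stable.
    """
    n = W.shape[0]

    # Forward relaxation: iterate to the activation fixed point y*.
    y = np.zeros(n)
    for _ in range(max_iter):
        y_new = sigmoid(W @ y + x)
        if np.max(np.abs(y_new - y)) < tol:
            y = y_new
            break
        y = y_new

    fprime = y * (1.0 - y)          # sigmoid derivative at the fixed point
    e = out_mask * (target - y)     # error injected only at output units

    # Backward relaxation: iterate the linear error equation
    #   z_i = e_i + sum_j w_ji f'(net_j) z_j
    # to its fixed point z*.
    z = np.zeros(n)
    for _ in range(max_iter):
        z_new = e + W.T @ (fprime * z)
        if np.max(np.abs(z_new - z)) < tol:
            z = z_new
            break
        z = z_new

    # Gradient-descent weight update: delta w_ij = lr * f'(net_i) * z_i * y_j.
    W += lr * np.outer(fprime * z, y)

    # Stability check: the discrete relaxation map converges locally when the
    # spectral radius of J_ij = f'(net_i) * w_ij is below one.
    J = fprime[:, None] * W
    stable = np.max(np.abs(np.linalg.eigvals(J))) < 1.0
    return y, stable
```

In this sketch the stability flag makes the abstract's concern concrete: if the spectral radius of the linearized relaxation exceeds one, the backward iteration need not converge and the propagated error signals can be incorrect, which is why a sufficiently small learning rate matters for keeping the learned fixed points stable.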