A. General Description

Rprop stands for 'Resilient backpropagation' and is a local adaptive learning scheme, performing supervised batch learning in multi-layer perceptrons. For a detailed discussion see also [1], [2], [3]. The basic principle of Rprop is to eliminate the harmful influence of the size of the partial derivative on the weight step. As a consequence, only the sign of the derivative is considered to indicate the direction of the weight update. The size of the weight change is exclusively determined by a weight-specific, so-called 'update-value' $\Delta_{ij}^{(t)}$:

$$
\Delta w_{ij}^{(t)} =
\begin{cases}
-\Delta_{ij}^{(t)}, & \text{if } \frac{\partial E}{\partial w_{ij}}^{(t)} > 0 \\[4pt]
+\Delta_{ij}^{(t)}, & \text{if } \frac{\partial E}{\partial w_{ij}}^{(t)} < 0 \\[4pt]
0, & \text{else}
\end{cases}
\tag{1}
$$

where $\frac{\partial E}{\partial w_{ij}}^{(t)}$ denotes the gradient information summed over all patterns of the pattern set ('batch learning'). It should be noted that, by replacing $\Delta_{ij}^{(t)}$ with a constant update-value $\Delta$, equation (1) yields the so-called 'Manhattan' update rule. The second step of Rprop learning is to determine the new update-values $\Delta_{ij}^{(t)}$. This is based on a sign-dependent adaptation process, similar to the learning-rate adaptation in [4], [5].
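The sign-based weight step of equation (1) can be sketched as follows; this is an illustrative NumPy fragment, not the authors' implementation, and the example gradient and update-value arrays are hypothetical:

```python
import numpy as np

def rprop_weight_step(grad, delta):
    """Weight change per equation (1): only the sign of the summed
    batch gradient dE/dw_ij picks the direction; the magnitude comes
    entirely from the weight-specific update-value Delta_ij.
    A zero gradient yields a zero step (the 'else' case)."""
    return -np.sign(grad) * delta

# Hypothetical values: a 2x2 weight matrix's summed batch gradient
# and its per-weight update-values Delta_ij.
grad = np.array([[0.7, -0.2],
                 [0.0,  1.3]])
delta = np.full((2, 2), 0.1)
step = rprop_weight_step(grad, delta)
# Positive gradient -> step -0.1, negative -> +0.1, zero -> 0.0.
```

Setting `delta` to a single scalar constant instead of a per-weight array recovers the 'Manhattan' update rule mentioned above.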
[1] Robert A. Jacobs, et al., "Increased rates of convergence through learning rate adaptation," Neural Networks, 1987.
[2] Tom Tollenaere, et al., "SuperSAB: Fast adaptive back propagation with good scaling properties," Neural Networks, 1990.
[3] Martin A. Riedmiller, et al., "A direct adaptive method for faster backpropagation learning: the RPROP algorithm," IEEE International Conference on Neural Networks, 1993.
[4] Martin A. Riedmiller, et al., "Advanced supervised learning in multi-layer perceptrons — From backpropagation to adaptive learning algorithms," 1994.