A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed. To overcome the inherent disadvantages of pure gradient descent, RPROP performs a local adaptation of the weight updates according to the behavior of the error function. In contrast to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforeseeable influence of the size of the derivative, but depends only on the temporal behavior of its sign. This leads to an efficient and transparent adaptation process. The capabilities of RPROP are shown in comparison to other adaptive techniques.
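The core idea above, adapting each weight's step size from the sign of its partial derivative rather than its magnitude, can be sketched as follows. This is a minimal, hedged illustration of the sign-based update scheme the abstract describes, not the paper's exact implementation; the function name `rprop_step` and the default hyperparameter values (growth factor 1.2, shrink factor 0.5, step bounds) are illustrative assumptions.

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One sign-based adaptive update in the spirit of RPROP.

    Each weight keeps its own step size. If the gradient keeps its sign
    between iterations, the step grows; if the sign flips (a minimum was
    overshot), the step shrinks and that weight's update is suppressed.
    The gradient's magnitude never enters the weight update directly.
    Returns (weight_update, new_step, grad_to_store).
    """
    sign_change = grad * prev_grad
    # Same sign: direction is stable, so accelerate (up to step_max).
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max), step)
    # Sign flip: we overshot, so decelerate (down to step_min).
    step = np.where(sign_change < 0,
                    np.maximum(step * eta_minus, step_min), step)
    # After a flip, zero the stored gradient so the next iteration
    # neither grows nor shrinks the step again for this weight.
    grad = np.where(sign_change < 0, 0.0, grad)
    # The update uses only the gradient's sign, scaled by the local step.
    update = -np.sign(grad) * step
    return update, step, grad
```

Minimizing a simple quadratic with this rule shows the characteristic behavior: the step grows geometrically while the gradient sign is stable, then halves on each overshoot until the weight settles near the minimum.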
Wolfram Schiffmann, et al. Optimization of the Backpropagation Algorithm for Training Multilayer Perceptrons
Tom Tollenaere, et al. SuperSAB: Fast adaptive back propagation with good scaling properties
Robert A. Jacobs, et al. Increased rates of convergence through learning rate adaptation
K. Lang, et al. Learning to tell two spirals apart
Scott E. Fahlman, et al. An empirical study of learning speed in back-propagation networks