Backpropagation learning algorithms typically collapse the network's structure into a single vector of weight parameters to be optimized. We suggest that their performance may be improved by utilizing the structural information instead of discarding it, and introduce a framework for "tempering" each weight accordingly.
In the tempering model, activation and error signals are treated as approximately independent random variables. The characteristic scale of weight changes is then matched to that of the residuals, allowing structural properties such as a node's fan-in and fan-out to affect the local learning rate and backpropagated error. The model also permits calculation of an upper bound on the global learning rate for batch updates, which in turn leads to different update rules for bias vs. non-bias weights.
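To make the scaling idea concrete, below is a minimal NumPy sketch written under the assumption that tempering amounts to dividing each weight layer's learning rate by the fan-in of its receiving nodes while leaving bias weights (whose effective fan-in is a constant 1) at the full global rate. The network sizes, the tanh/linear architecture, and the value of eta are illustrative choices, not taken from the paper, and the paper's actual update and error-scaling rules may differ in form.

```python
# Hypothetical sketch of fan-in-scaled ("tempered") learning rates in a
# small two-layer network; the exact scaling rule in the paper may differ.
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and a single training pair (illustrative, not from the paper).
n_in, n_hid, n_out = 8, 4, 2
x = rng.normal(size=n_in)       # input pattern
t = rng.normal(size=n_out)      # target

W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # hidden weights, fan-in = n_in
b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # output weights, fan-in = n_hid
b2 = np.zeros(n_out)

eta = 0.5  # global learning rate (illustrative value)

for _ in range(100):
    # Forward pass: tanh hidden layer, linear output layer.
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2

    # Backward pass: residual at the output, error backpropagated to hidden.
    d_out = y - t
    d_hid = (W2.T @ d_out) * (1.0 - h ** 2)

    # Tempered updates (assumed form): divide each layer's rate by its
    # fan-in so the characteristic size of a weight change matches that of
    # the residual it corrects; bias weights keep the unscaled global rate.
    W2 -= (eta / n_hid) * np.outer(d_out, h)
    b2 -= eta * d_out
    W1 -= (eta / n_in) * np.outer(d_hid, x)
    b1 -= eta * d_hid

err = W2 @ np.tanh(W1 @ x + b1) + b2 - t
print("final squared error:", 0.5 * float(np.sum(err ** 2)))
```

If fan-in varied from node to node, the same scaling would be applied per receiving node rather than per layer; the layer-wise division here is just the simplest case.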
This approach yields hitherto unparalleled performance on the family relations benchmark, a task requiring a deep multi-layer network: for both batch learning with momentum and the delta-bar-delta algorithm, convergence at the optimal learning rate is sped up by more than an order of magnitude.