The performance of feedforward neural networks in real applications can often be improved significantly if use is made of a priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard backpropagation algorithm cannot be applied. In this letter, we derive a computationally efficient learning algorithm, for a feedforward network of arbitrary topology, which can be used to minimize such error functions. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
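To make the idea concrete, here is a minimal sketch of a derivative-based regularization term for interpolation. It assumes a hypothetical single-hidden-layer tanh network with scalar input and output (the special case the letter treats), and a penalty on the squared second derivative of the network mapping; the function names and the choice of penalty are illustrative, not the letter's exact formulation.

```python
import numpy as np

def forward(x, w1, b1, w2, b2):
    # Hidden activations and output of a 1-input, 1-output tanh network,
    # evaluated at a vector of input points x.
    a = np.outer(x, w1) + b1          # (N, H) hidden pre-activations
    z = np.tanh(a)                    # (N, H) hidden unit outputs
    return z @ w2 + b2                # (N,)  network outputs

def second_derivative(x, w1, b1, w2, b2):
    # Analytic d^2 y / dx^2 for the network above:
    #   y''(x) = sum_j w2_j * w1_j^2 * (-2 tanh(a_j)) * (1 - tanh(a_j)^2)
    a = np.outer(x, w1) + b1
    t = np.tanh(a)
    return ((-2.0 * t * (1.0 - t**2)) * (w1**2)) @ w2

def regularized_error(x, targets, w1, b1, w2, b2, lam=0.1):
    # Sum-of-squares data term plus a curvature (second-derivative) penalty.
    # Minimizing this requires gradients of y'' w.r.t. the weights, which is
    # why plain backpropagation no longer suffices.
    y = forward(x, w1, b1, w2, b2)
    curv = second_derivative(x, w1, b1, w2, b2)
    return np.sum((y - targets)**2) + lam * np.sum(curv**2)
```

Setting `lam=0` recovers the standard sum-of-squares error; increasing `lam` trades data fit against smoothness of the fitted mapping.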