We study generalization in a simple framework of feedforward linear networks with n inputs and n outputs, trained from examples by gradient descent on the usual quadratic error function. We derive analytical results on the behavior of the validation function, i.e., the LMS error function computed on a set of validation patterns. We show that the behavior of the validation function depends critically on the initial conditions and on the characteristics of the noise. Under certain simple assumptions, if the initial weights are sufficiently small, the validation function has a unique minimum corresponding to an optimal stopping time for training, for which simple bounds can be calculated. There also exist situations where the validation function exhibits more complicated and somewhat unexpected behavior, such as multiple local minima (at most n) of variable depth and long but finite plateau effects. Additional results and possible extensions are briefly discussed.
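The qualitative picture described above can be reproduced numerically. The following is a minimal sketch, not the paper's analytical derivation: a linear map with n inputs and n outputs is trained by gradient descent on the quadratic (LMS) error from small initial weights, on noisy examples generated by a linear teacher, while the validation error is tracked to locate the optimal stopping time. All names, dimensions, the learning rate, and the noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting: n inputs = n outputs, noisy linear teacher.
n = 10
p_train, p_val = 20, 200
noise = 1.0

W_true = rng.normal(size=(n, n))                 # teacher map
X_tr = rng.normal(size=(n, p_train))
X_va = rng.normal(size=(n, p_val))
Y_tr = W_true @ X_tr + noise * rng.normal(size=(n, p_train))
Y_va = W_true @ X_va + noise * rng.normal(size=(n, p_val))

W = 0.01 * rng.normal(size=(n, n))               # small initial weights
eta = 0.01                                       # learning rate

def lms_error(W, X, Y):
    """Quadratic (LMS) error per pattern."""
    return 0.5 * np.sum((W @ X - Y) ** 2) / X.shape[1]

val_curve = []
for t in range(3000):
    # Gradient of the LMS training error with respect to W.
    grad = (W @ X_tr - Y_tr) @ X_tr.T / p_train
    W -= eta * grad
    val_curve.append(lms_error(W, X_va, Y_va))

# With small initial weights, the validation curve typically has a
# unique minimum: the optimal stopping time for training.
t_opt = int(np.argmin(val_curve))
print(f"optimal stopping time ~ step {t_opt}: "
      f"val error {val_curve[t_opt]:.4f} vs final {val_curve[-1]:.4f}")
```

With few training patterns relative to n and a nonnegligible noise level, as chosen here, the validation error first decreases and then rises as the weights begin to fit the noise, so stopping at the minimum improves generalization over training to convergence.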