An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
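The dependence described above can be probed empirically with a small Monte Carlo sketch. This is an illustrative experiment, not the paper's analytical derivation: it builds a random layered network of sign-threshold (Adaline-style) units, multiplies every weight by a random factor whose spread is a chosen percentage, and estimates how often a single output neuron flips. The function names and parameter choices here are assumptions for illustration.

```python
import numpy as np

def madaline_output(x, weights):
    """Propagate input x through layers of sign-threshold (Adaline) units."""
    a = x
    for W in weights:
        a = np.sign(W @ a)
        a[a == 0] = 1.0  # break exact ties deterministically
    return a

def error_probability(n_layers=2, n_per_layer=100, pct=0.05,
                      n_trials=1000, rng=None):
    """Monte Carlo estimate of the probability that one output neuron
    flips when every weight is perturbed by a zero-mean random amount
    with standard deviation `pct` (fractional weight change).
    Hypothetical setup: Gaussian weights, random bipolar inputs."""
    rng = rng if rng is not None else np.random.default_rng(0)
    errors = 0
    for _ in range(n_trials):
        weights = [rng.standard_normal((n_per_layer, n_per_layer))
                   for _ in range(n_layers)]
        x = np.sign(rng.standard_normal(n_per_layer))
        y = madaline_output(x, weights)
        # Multiplicative perturbation: each weight scaled by (1 + pct * noise)
        perturbed = [W * (1.0 + pct * rng.standard_normal(W.shape))
                     for W in weights]
        y_hat = madaline_output(x, perturbed)
        errors += int(y[0] != y_hat[0])  # watch a single output neuron
    return errors / n_trials
```

Sweeping `pct` or `n_layers` with this sketch should reproduce the qualitative trends stated in the abstract: the flip probability grows with the percentage weight change and with the number of layers, while varying `n_per_layer` among large values changes it little.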