Noise injection into inputs in back-propagation learning

Back-propagation can be considered a nonlinear regression technique that allows a nonlinear neural network to acquire an input/output (I/O) association from a limited number of samples chosen from a population of input and output patterns. A crucial problem with back-propagation is its generalization capability: a network successfully trained on given samples is not guaranteed to provide the desired associations for untrained inputs as well. Concerning this problem, some authors have shown experimentally that the generalization capability can be remarkably enhanced by training the network with noise-injected inputs. The author mathematically explains why and how noise injection into inputs has such an effect.
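
To make the training scheme concrete, the following is a minimal sketch of back-propagation with Gaussian noise injected into the inputs on every pass; it is not the paper's experiments or derivation, and the network size, learning rate, and noise level sigma are illustrative assumptions.

```python
# Minimal sketch: back-propagation with input-noise injection (assumed setup).
import numpy as np

rng = np.random.default_rng(0)

# Toy I/O association: a limited sample of 1-D inputs and target outputs.
X = np.linspace(-1.0, 1.0, 10).reshape(-1, 1)   # training inputs
Y = np.sin(np.pi * X)                           # desired outputs

n_hidden, lr, sigma = 16, 0.1, 0.1              # sigma = input-noise std (assumed)
W1 = rng.normal(0.0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(5000):
    # Noise injection: perturb each input with fresh Gaussian noise per pass.
    Xn = X + sigma * rng.normal(size=X.shape)

    # Forward pass: tanh hidden layer, linear output.
    H = np.tanh(Xn @ W1 + b1)
    P = H @ W2 + b2

    # Back-propagation of the mean-squared-error gradient.
    dP = (P - Y) / len(X)
    dW2 = H.T @ dP; db2 = dP.sum(0)
    dH = (dP @ W2.T) * (1.0 - H**2)
    dW1 = Xn.T @ dH; db1 = dH.sum(0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Generalization check on untrained inputs (no noise at test time).
X_test = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
P_test = np.tanh(X_test @ W1 + b1) @ W2 + b2
print("test MSE:", float(np.mean((P_test - np.sin(np.pi * X_test))**2)))
```

Note that the noise is redrawn at every pass, so the network effectively sees a smoothed neighborhood of each training sample rather than the samples alone; noise is omitted at test time, where generalization to untrained inputs is measured.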