Research on neural network learning within the supervised learning paradigm has focused on efficient search (or optimization) over the error surface. Less attention has been given to the effect representation has on the error surface. One interesting question to ask is: how does the choice of data points affect learning time for a neural network on linearly separable problems? This paper examines the issue of class representation in light of its effect on the error surface. Error surface plots visually suggest that an equal representation of points for each class decreases learning time. This hypothesis is supported by simulation results for a simple classification problem.
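To make the hypothesis concrete, here is a minimal sketch of the kind of simulation described. It is not the paper's actual setup: the single sigmoid unit, the synthetic 2-D data generator, the learning rate, and the convergence threshold are all illustrative assumptions. It trains the unit by the delta rule on a linearly separable problem twice, once with equal class representation and once with a skewed split of the same total size, and compares epochs to convergence.

```python
import numpy as np

def make_data(n_pos, n_neg, rng):
    # Linearly separable 2-D problem: positive points lie well above
    # the line y = 0.5, negative points well below it.
    pos = rng.uniform(0.0, 1.0, (n_pos, 2)) + np.array([0.0, 1.0])
    neg = rng.uniform(0.0, 1.0, (n_neg, 2)) - np.array([0.0, 1.0])
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
    return X, y

def epochs_to_converge(X, y, lr=2.0, max_epochs=5000):
    # Single sigmoid unit trained by batch gradient descent on squared
    # error (the delta rule); returns the epoch at which every output
    # is within 0.1 of its target.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for epoch in range(1, max_epochs + 1):
        out = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        err = out - y
        if np.max(np.abs(err)) < 0.1:
            return epoch
        grad = err * out * (1.0 - out)        # dE/dz for squared error
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * np.sum(grad) / len(y)
    return max_epochs

rng = np.random.default_rng(42)
X_bal, y_bal = make_data(50, 50, rng)    # equal class representation
X_skew, y_skew = make_data(90, 10, rng)  # skewed representation, same total
print("balanced:", epochs_to_converge(X_bal, y_bal), "epochs")
print("skewed:  ", epochs_to_converge(X_skew, y_skew), "epochs")
```

Under the paper's hypothesis, the balanced run should converge in fewer epochs, since with a skewed split the minority class contributes little to the batch gradient and its points are the last to reach their targets.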