The Effect of Representation on Error Surface

Research on neural network learning within the supervised learning paradigm has focused on efficient search (or optimization) over the error surface. Less attention has been given to the effect representation has on the error surface. One interesting question is how the choice of data points affects learning time for a neural network on linearly separable problems. This paper examines the issue of class representation in light of its effect on the error surface. Error surface plots visually suggest that an equal representation of points for each class decreases learning time. This hypothesis is supported by simulation results for a simple classification problem.
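
The kind of simulation the abstract refers to can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual experiment: it assumes a single sigmoid unit trained by batch gradient descent on a sum-of-squares error, two made-up linearly separable Gaussian clusters, and arbitrary choices of learning rate, tolerance, and class proportions (50/50 vs. 90/10). It compares the number of epochs needed to reach a small training error under equal versus unequal class representation with the same total number of points.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_pos, n_neg):
    """Two linearly separable Gaussian clusters in 2-D (assumed data)."""
    pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(n_pos, 2))
    neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(n_neg, 2))
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
    return X, y

def epochs_to_converge(X, y, lr=0.1, max_epochs=5000, tol=1e-3):
    """Train a single sigmoid unit with batch gradient descent on
    sum-of-squares error; return the first epoch at which the mean
    squared error drops below tol (or max_epochs if it never does)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias input
    w = rng.normal(scale=0.1, size=Xb.shape[1])
    for epoch in range(1, max_epochs + 1):
        out = 1.0 / (1.0 + np.exp(-Xb @ w))     # sigmoid activation
        err = y - out
        if np.mean(err ** 2) < tol:
            return epoch
        # gradient descent step on E = (1/2) * sum((y - out)^2)
        w += lr * Xb.T @ (err * out * (1.0 - out)) / len(y)
    return max_epochs

# Equal vs. unequal class representation, same total sample size.
balanced = epochs_to_converge(*make_data(50, 50))
skewed = epochs_to_converge(*make_data(90, 10))
print(f"epochs to converge, balanced 50/50: {balanced}")
print(f"epochs to converge, skewed   90/10: {skewed}")
```

Running variations of this comparison (different seeds, proportions, and separations) is one way to probe the paper's hypothesis empirically; the specific network, error measure, and data used in the paper's own simulations are not given in this section.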