Analysis of Decision Boundaries Generated by Constructive Neural Network Learning Algorithms

Constructive learning algorithms offer an approach to the incremental construction of near-minimal artificial neural networks for pattern classification. Examples of such algorithms include the Tower, Pyramid, Upstart, and Tiling algorithms, which construct multilayer networks of threshold logic units (or multilayer perceptrons). These algorithms differ in the topology of the networks they construct, which in turn biases the search for a decision boundary that correctly classifies the training set. This paper presents an analysis of such algorithms from a geometrical perspective. The analysis helps better characterize the search bias employed by the different algorithms in relation to the geometrical distribution of examples in the training set. Simple experiments with non-linearly separable training sets support the results of the mathematical analysis of these algorithms. This suggests the possibility of designing more efficient constructive algorithms that dynamically choose among different biases to build near-minimal networks for pattern classification.
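As a concrete illustration of the building block these algorithms share, the sketch below implements a single threshold logic unit and a hand-wired two-layer network that classifies XOR, the standard non-linearly separable training set. The specific weights and the `two_layer_xor` construction are illustrative assumptions for this sketch, not the output of Tower, Pyramid, Upstart, or Tiling; they only show why a single unit's hyperplane boundary is insufficient and why constructive algorithms add units.

```python
import numpy as np

def tlu(weights, bias, x):
    """Threshold logic unit: outputs 1 if w.x + b >= 0, else 0.

    Its decision boundary is the hyperplane w.x + b = 0; constructive
    algorithms build multilayer networks by stacking units like this.
    """
    return int(np.dot(weights, x) + bias >= 0)

# XOR: the classic non-linearly separable training set. No single TLU
# can separate it, which motivates constructive (multilayer) approaches.
xor_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_labels = [0, 1, 1, 0]

def two_layer_xor(x):
    # Hand-picked hidden units (an assumed construction, for illustration):
    h1 = tlu(np.array([1, 1]), -0.5, np.array(x))    # computes x1 OR x2
    h2 = tlu(np.array([-1, -1]), 1.5, np.array(x))   # computes NAND(x1, x2)
    # Output unit ANDs the two hidden responses, yielding XOR.
    return tlu(np.array([1, 1]), -1.5, np.array([h1, h2]))

print(all(two_layer_xor(x) == y
          for x, y in zip(xor_inputs, xor_labels)))  # prints True
```

The intersection of the two hidden half-spaces carves out the XOR region, which is exactly the kind of geometrical composition of decision boundaries the paper's analysis examines.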