Generalization capabilities of minimal kernel-based networks

The authors discuss and analyze a class of kernel-based networks derived from the Parzen classifier. Although this class of networks has the highly desirable property of consistency, it imposes very high computational demands. Therefore, a novel method and an existing method for minimizing the size of these networks while preserving classification performance are discussed. The authors present a theorem that explicitly states the relation between the parameters of the network design procedure and the confidence one can have in the classification performance of the minimized network. The methods presented enable a substantial reduction of the network size and are essentially independent of the underlying probability distributions.
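For readers unfamiliar with the underlying classifier, the following is a minimal sketch of a Parzen-window classifier of the kind on which such kernel-based networks are built. The Gaussian kernel, the bandwidth parameter h, and the synthetic data are illustrative assumptions, not the authors' construction or their minimization procedure.

```python
import numpy as np

def parzen_classify(X_train, y_train, X_test, h=0.5):
    """Assign each test point to the class with the larger Parzen density estimate."""
    d = X_train.shape[1]
    norm = (2 * np.pi * h**2) ** (d / 2)  # Gaussian kernel normalization constant
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            # Average of Gaussian kernels centered on the class-c training points
            sq_dists = np.sum((Xc - x) ** 2, axis=1)
            scores.append(np.sum(np.exp(-sq_dists / (2 * h**2))) / (len(Xc) * norm))
        preds.append(classes[np.argmax(scores)])
    return np.array(preds)

# Illustrative usage on synthetic two-class data
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-1.0, size=(50, 2))
X1 = rng.normal(loc=+1.0, size=(50, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)
X_test = rng.normal(size=(10, 2))
print(parzen_classify(X_train, y_train, X_test, h=0.5))
```

Note that every test evaluation sums a kernel over all training points, which is the computational burden that motivates reducing (minimizing) the number of kernels while preserving classification performance.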