Incremental Learning with a Stopping Criterion - Experimental Results

We recently proposed a new incremental procedure for supervised learning with noisy data. Each step consists of adding to the current network a new unit (or a small 2- or 3-neuron subnetwork) trained to learn the residual error of the current network. The incremental step is repeated until the error of the current network can be considered as noise. The stopping criterion is very simple and can be deduced directly from a statistical test on the estimated parameters of the new unit. In this paper, we present an experimental comparison between several variants of the incremental algorithm and the classic backpropagation algorithm, with respect to convergence, speed of convergence, and the optimal number of neurons. The experimental results demonstrate the efficacy of this new incremental scheme, especially in avoiding spurious minima and in designing a network of well-suited size. The number of basic operations is also reduced, yielding an average gain in convergence speed of about 20%.
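To make the growth-and-stop loop concrete, the following is a minimal sketch of the incremental scheme described above, not the paper's exact algorithm: each new sigmoidal unit is trained by gradient descent on the residual error, and growth stops when a statistical test suggests the residual is indistinguishable from noise. The F-test on residual variance reduction used here as the stopping criterion is an illustrative assumption, standing in for the paper's test on the estimated parameters of the new unit.

```python
# Sketch of incremental learning with a statistical stopping criterion.
# Assumptions (not from the paper): single-input sigmoidal units, plain
# gradient descent, and an F-test on variance reduction as the stop rule.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def train_unit(x, residual, epochs=2000, lr=0.05):
    """Fit one unit v*sigmoid(w*x + b) to the residual by gradient descent."""
    w, b, v = rng.normal(size=3)
    for _ in range(epochs):
        z = 1.0 / (1.0 + np.exp(-(w * x + b)))   # unit activation
        err = v * z - residual                   # pointwise error on the residual
        g = err * v * z * (1 - z)                # gradient through the sigmoid
        w -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
        v -= lr * np.mean(err * z)
    return w, b, v

def incremental_fit(x, y, max_units=20, alpha=0.05):
    """Grow the network until a new unit no longer reduces residual variance significantly."""
    units, residual = [], y.copy()
    n = len(x)
    for _ in range(max_units):
        w, b, v = train_unit(x, residual)
        new_residual = residual - v / (1.0 + np.exp(-(w * x + b)))
        # Assumed stopping test: variance-ratio F-test; stop when the new
        # unit's contribution is not significant at level alpha, i.e. the
        # remaining error behaves like noise.
        f = np.var(residual) / np.var(new_residual)
        p = 1.0 - stats.f.cdf(f, n - 1, n - 1)
        if p > alpha:
            break
        units.append((w, b, v))
        residual = new_residual
    return units

# Noisy 1-D regression problem: the loop should stop growing the network
# once the residual error is noise-like.
x = np.linspace(-3, 3, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.shape)
units = incremental_fit(x, y)
print(f"network grown to {len(units)} unit(s)")
```

Because each candidate unit is trained only on the residual, the already-fitted units are never retrained; this is what reduces the number of basic operations per step relative to retraining the whole network with backpropagation.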