DWINA: Depth and Width Incremental Neural Algorithm

This paper presents DWINA, an algorithm for designing the depth and width of neural architectures in the case of supervised learning with noisy data. Each new unit is trained to learn the error of the existing network and is connected to it in such a way that it does not affect the network's previous performance. Criteria for choosing between increasing the width or increasing the depth are proposed, and the connection procedure for each case is described. The stopping criterion is very simple: it consists of comparing the residual error signal to the noise signal. Preliminary experiments point out the efficacy of the algorithm, especially in avoiding spurious minima and in designing a network of well-suited size. The complexity of the algorithm (number of operations) is on average the same as that needed for a convergent run of the BP algorithm on a static architecture having the optimal number of parameters. Moreover, no significant difference is found between networks having the same number of parameters but different structures. Finally, the algorithm exhibits an interesting behaviour: the MSE on the training set tends to decrease continuously during the process, evolving directly and steadily toward the solution of the mapping problem.
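The growth loop sketched in the abstract can be illustrated as follows. This is a minimal sketch, not the paper's implementation: the unit type (a tanh unit), the connection rule (a least-squares output weight, so the new unit can only reduce the training error left by the existing network), and the assumption that the noise variance is known are all placeholders, and the depth-versus-width criterion is omitted. The names fit_unit and dwina_sketch are hypothetical.

```python
import numpy as np

def fit_unit(X, residual, epochs=200, lr=0.1, seed=None):
    """Train a single tanh unit on the current residual
    (hypothetical unit type; the paper does not fix the activation)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = np.tanh(X @ w + b)
        err = z - residual               # unit output vs. residual target
        grad = (1.0 - z**2) * err        # chain rule through tanh
        w -= lr * X.T @ grad / len(X)    # gradient step on the MSE
        b -= lr * grad.mean()
    return w, b

def dwina_sketch(X, y, noise_var, max_units=50):
    """Incremental loop: each new unit learns the error of the existing
    network; growth stops once the residual MSE reaches the (assumed
    known) noise level, mirroring the paper's stopping criterion."""
    units, output = [], np.zeros(len(y))
    for _ in range(max_units):
        residual = y - output
        if residual @ residual / len(y) <= noise_var:
            break                        # residual error at noise level
        w, b = fit_unit(X, residual)
        a = np.tanh(X @ w + b)
        alpha = (a @ residual) / (a @ a)  # least-squares output weight
        units.append((w, b, alpha))
        output = output + alpha * a       # existing fit is left untouched
    return units, output
```

Because each unit is added with a fresh output weight while the earlier weights are frozen, the training MSE is non-increasing across additions, which is one way to read the abstract's claim that the error decreases continuously during the process.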