Combining sigmoids and radial basis functions in evolutive neural architectures

An incremental algorithm is proposed for supervised learning of noisy data using two-layer neural networks with linear output units and a mixture of sigmoids and radial basis functions in the hidden layer (2-[S,RBF]NN). Each time the network has to be extended, we compare different estimations of the residual error: the one provided by a sigmoidal unit responding to the overall input space, and those provided by a number of RBFs responding to localized regions. The unit which provides the best estimation is selected and installed in the existing network. The procedure is repeated until the error reduces to the noise level in the data. Experimental results show that the incremental algorithm using 2-[S,RBF]NN is considerably faster than the one using only sigmoidal hidden units. It also leads to a less complex final network and avoids being trapped in spurious minima.
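To make the growth procedure concrete, the following is a minimal sketch of the candidate-selection loop, not the authors' implementation: each round fits one global sigmoidal candidate and a handful of localized Gaussian RBF candidates to the current residual, installs whichever explains it best, and stops once the residual error falls to the assumed noise level. All function names, the RBF placement heuristic (centres at the largest residuals), the fixed RBF width, and the gradient-descent settings are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_sigmoid_candidate(X, residual, epochs=200, lr=0.05, rng=None):
    """Fit one global sigmoidal unit to the current residual by gradient descent
    on the squared error; returns (predict_fn, mse). Hyperparameters are assumptions."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)
    b, v = 0.0, 0.0                      # v is the linear output weight of this unit
    for _ in range(epochs):
        h = sigmoid(X @ w + b)           # hidden-unit activation over the whole input space
        err = v * h - residual
        common = err * v * h * (1 - h)
        v -= lr * np.mean(err * h)
        w -= lr * (X.T @ common) / n
        b -= lr * np.mean(common)
    mse = np.mean((v * sigmoid(X @ w + b) - residual) ** 2)
    return (lambda Xq: v * sigmoid(Xq @ w + b)), mse

def fit_rbf_candidate(X, residual, center, width):
    """Fit the output weight of a single Gaussian RBF with a given centre and width;
    returns (predict_fn, mse)."""
    phi = np.exp(-np.sum((X - center) ** 2, axis=1) / (2 * width ** 2))
    v = phi @ residual / (phi @ phi + 1e-12)   # least-squares weight for one localized unit
    mse = np.mean((v * phi - residual) ** 2)
    return (lambda Xq: v * np.exp(-np.sum((Xq - center) ** 2, axis=1)
                                  / (2 * width ** 2))), mse

def grow_network(X, y, noise_level, max_units=30, rbf_width=0.5, n_rbf_candidates=5, seed=0):
    """Incrementally add the candidate (global sigmoid or local RBF) that best
    models the current residual, until the residual error reaches the noise level."""
    rng = np.random.default_rng(seed)
    units, residual = [], y.copy()
    for _ in range(max_units):
        # candidate 1: a sigmoid responding to the overall input space
        candidates = [fit_sigmoid_candidate(X, residual, rng=rng)]
        # further candidates: RBFs centred where the residual is largest (assumed heuristic)
        for i in np.argsort(-np.abs(residual))[:n_rbf_candidates]:
            candidates.append(fit_rbf_candidate(X, residual, X[i], rbf_width))
        best_fn, best_mse = min(candidates, key=lambda c: c[1])
        units.append(best_fn)            # install the best-estimating unit
        residual = residual - best_fn(X)
        if np.mean(residual ** 2) <= noise_level ** 2:
            break                        # error has reduced to the noise in the data
    return lambda Xq: sum(u(Xq) for u in units)

In this sketch the competition between a global and several local candidates is what lets the network spend sigmoids on broad trends and RBFs on localized structure, which is the intuition behind the reported speed-up and smaller final networks.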