Capabilities of a four-layered feedforward neural network: four layers versus three

Neural-network theorems state that a four-layered feedforward neural network is equivalent to a three-layered one only when there are infinitely many hidden units. In actual applications, however, infinitely many hidden units are impractical, so studies should focus on the capabilities of networks with a finite number of hidden units. In this paper, a proof is given showing that a three-layered feedforward network with N-1 hidden units can give any N input-target relations exactly. Based on the results of the proof, a four-layered network is constructed and is found to give any N input-target relations with a negligibly small error using only (N/2)+3 hidden units. This shows that a four-layered feedforward network is superior to a three-layered feedforward network in terms of the number of parameters needed to fit the training data.
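The three-layer result can be illustrated numerically. With N-1 hidden units and one output bias, the output layer has exactly N free parameters, so fitting N targets reduces to solving an N-by-N linear system over the hidden activations, which is generically nonsingular for distinct inputs and random hidden weights. The following is a minimal sketch of that idea (not the paper's construction; the random hidden weights and sigmoid choice are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = np.sort(rng.uniform(-1.0, 1.0, N))   # N distinct scalar inputs
t = rng.uniform(-1.0, 1.0, N)            # N arbitrary targets

# Hidden layer: N-1 sigmoid units with fixed random input weights/biases
# (an illustrative assumption; the paper constructs weights explicitly).
w = rng.normal(size=N - 1)
b = rng.normal(size=N - 1)
H = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b)))   # N x (N-1) activations

# Output layer: N-1 weights plus one bias = N free parameters.
# Solving the resulting N x N linear system fits all N targets exactly
# (up to floating-point precision), since the augmented activation
# matrix is generically nonsingular.
A = np.hstack([H, np.ones((N, 1))])
v, *_ = np.linalg.lstsq(A, t, rcond=None)
y = A @ v

print(np.max(np.abs(y - t)))
```

The printed maximum error is at machine-precision level, confirming that N output-layer parameters suffice to interpolate N points; the paper's contribution is showing a four-layer arrangement achieves nearly the same with roughly half as many hidden units.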
