A Gaussian potential function network with hierarchically self-organizing learning

Abstract This article presents a design principle for a neural network using Gaussian activation functions, referred to as a Gaussian Potential Function Network (GPFN), and explores the capability of a GPFN to learn a continuous input-output mapping from a given set of teaching patterns. The design principle is highlighted by a Hierarchically Self-Organizing Learning (HSOL) algorithm featuring the automatic recruitment of hidden units under the paradigm of hierarchical learning. A GPFN generates a potential field of arbitrary shape over the input space, as an input-output mapping, by synthesizing a number of Gaussian potential functions provided by individual hidden units referred to as Gaussian Potential Function Units (GPFUs). The construction of a GPFN is carried out by the HSOL algorithm, which incrementally recruits the minimum necessary number of GPFUs based on the control of the effective radii of individual GPFUs, and trains the locations (mean vectors) and shapes (variances) of the individual Gaussian potential functions, as well as their summation weights, based on the Backpropagation algorithm. Simulations were conducted to demonstrate and evaluate GPFNs constructed by the HSOL algorithm for several sets of teaching patterns.
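
The following sketch (Python with NumPy, not from the article) is a minimal illustration of the mapping described above: a GPFN output formed as a weighted sum of Gaussian potential functions, with means, variances, and summation weights adjusted by gradient descent. The recruitment rule is a simplified stand-in for HSOL; the effective-radius control is reduced to a residual-error check at the worst-fit teaching pattern, and all class names, thresholds, and constants are illustrative assumptions.

```python
import numpy as np

class GPFU:
    """One Gaussian Potential Function Unit: a weighted Gaussian bump."""
    def __init__(self, mean, sigma, weight):
        self.mean = np.asarray(mean, dtype=float)    # location (mean vector)
        self.sigma = np.asarray(sigma, dtype=float)  # per-dimension spread (shape)
        self.weight = float(weight)                  # summation weight

    def activation(self, X):
        # Gaussian potential: exp(-0.5 * sum(((x - m) / sigma)^2))
        z = (X - self.mean) / self.sigma
        return np.exp(-0.5 * np.sum(z * z, axis=-1))

class GPFN:
    """Gaussian Potential Function Network: a weighted sum of GPFUs."""
    def __init__(self):
        self.units = []

    def output(self, X):
        if not self.units:
            return np.zeros(X.shape[0])
        return sum(u.weight * u.activation(X) for u in self.units)

    def train_step(self, X, t, lr=0.05):
        """One gradient-descent pass over the teaching patterns (X, t)."""
        err = self.output(X) - t                     # per-pattern error
        for u in self.units:
            a = u.activation(X)                      # (N,)
            z = (X - u.mean) / u.sigma               # (N, D)
            g = err * a * u.weight                   # shared gradient factor
            # Gradient-descent updates of weight, mean, and spread
            u.weight -= lr * np.mean(err * a)
            u.mean   -= lr * np.mean(g[:, None] * z / u.sigma, axis=0)
            u.sigma  -= lr * np.mean(g[:, None] * (z * z) / u.sigma, axis=0)
            u.sigma = np.maximum(u.sigma, 1e-3)      # keep spreads positive
        return float(np.mean(err ** 2))

    def recruit_if_needed(self, X, t, tol=0.05):
        """Simplified stand-in for HSOL recruitment: add a GPFU centred on the
        worst-fit teaching pattern when its residual error exceeds `tol`."""
        resid = t - self.output(X)
        i = int(np.argmax(np.abs(resid)))
        if abs(resid[i]) > tol:
            self.units.append(GPFU(mean=X[i],
                                   sigma=np.full(X.shape[1], 0.3),
                                   weight=resid[i]))
            return True
        return False

# Usage: approximate a one-dimensional mapping from 50 teaching patterns.
X = np.linspace(-1.0, 1.0, 50)[:, None]
t = np.sin(3.0 * X[:, 0])
net = GPFN()
for epoch in range(2000):
    if epoch % 100 == 0:
        net.recruit_if_needed(X, t)
    mse = net.train_step(X, t)
print(f"GPFUs recruited: {len(net.units)}, final MSE: {mse:.4f}")
```

In this toy version, new units are only considered every 100 epochs so that existing units have time to adapt before another is recruited, loosely mirroring the incremental recruitment of the minimum necessary number of GPFUs described in the abstract.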
