Constrained Optimization for Neural Map Formation: A Unifying Framework for Weight Growth and Normalization

Computational models of neural map formation can be considered on at least three levels of abstraction: detailed models that include neural activity dynamics, weight dynamics that abstract from the activity dynamics through an adiabatic approximation, and constrained optimization, from which equations governing weight dynamics can be derived. Constrained optimization pairs an objective function, from which a weight growth rule is derived as a gradient flow, with constraints, from which normalization rules are derived. In this article, we present an example of how an optimization problem can be derived from detailed nonlinear neural dynamics. A systematic investigation reveals how different weight dynamics introduced previously can be derived from two types of objective function terms and two types of constraints; this includes dynamic link matching as a special case of neural map formation. We focus in particular on the role of coordinate transformations in deriving different weight dynamics from the same optimization problem. Several examples illustrate how the constrained optimization framework can help in understanding, generating, and comparing different models of neural map formation. The techniques used in this analysis may also be useful in investigating other types of neural dynamics.
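The scheme the abstract describes, weight growth as a gradient flow on an objective function, interleaved with a normalization rule that enforces a constraint, can be sketched in a few lines. The sketch below is illustrative only and is not taken from the paper: the linear Hebbian objective H(W) = Σᵢⱼ Cᵢⱼ Wᵢⱼ, the per-unit sum constraint, and the multiplicative normalization step are assumptions chosen as the simplest concrete instance of the framework.

```python
import numpy as np

# Illustrative sketch (assumed, not from the paper): weight growth as a
# gradient flow on a linear Hebbian objective H(W) = sum_ij C_ij * W_ij,
# where C is an input correlation matrix, with the constraint
# sum_j W_ij = 1 per output unit enforced by multiplicative normalization.

rng = np.random.default_rng(0)
n_out, n_in = 3, 4
C = rng.random((n_out, n_in))          # correlations driving weight growth
W = rng.random((n_out, n_in))
W /= W.sum(axis=1, keepdims=True)      # start on the constraint surface

eta = 0.1                              # growth-step size
for _ in range(200):
    W += eta * C                       # gradient step: dH/dW_ij = C_ij
    W = np.clip(W, 0.0, None)          # keep weights non-negative
    W /= W.sum(axis=1, keepdims=True)  # multiplicative normalization

print(np.allclose(W.sum(axis=1), 1.0))  # constraint holds after dynamics
```

With this particular (linear) objective, the interleaved dynamics have a simple fixed point: each weight row converges to the corresponding row of C rescaled to unit sum, which makes the interplay between the growth term and the normalization rule easy to verify numerically. Other objective terms and constraint types from the paper's taxonomy would replace the gradient step and the normalization step, respectively.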
