Convergence and divergence in neural networks: Processing of chaos and biological analogy

We have used simple neural networks as models to examine two interrelated biological questions: What are the functional implications of the converging and diverging projections that profusely interconnect neurons? How do the dynamical features of the input signal affect the responses of such networks? In this paper we examine subsets of these questions, using error back-propagation learning as the network response in question. The dynamics of the input signals were suggested by our previous biological findings. These signals consisted of chaotic series generated by the recursive logistic equation $x_{n+1} = 3.95(1 - x_n)x_n$, random noise, and sine functions. The input signals were also sent to a variety of teacher functions that controlled the type of computations the networks were required to perform. Single and double hidden-layer networks were used to examine, respectively, divergence and a combination of divergence and convergence. Networks containing single and multiple input/output units were used to determine how the networks learned when they were required to perform single or multiple tasks on their input signals. Back-propagation was performed "on-line" in each training trial, and all processing was analog. Thereafter, the network units were examined "neurophysiologically" by selectively removing individual synapses to determine their effect on system error. The findings show that the dynamics of the input signals strongly affect the learning process. Chaotic point processes, analogous to spike trains in biological systems, provide excellent signals on which networks can perform a variety of computational tasks. Continuous functions that vary within bounds, whether chaotic or not, impose some limitations. Differences in convergence and divergence determine the relative strength of the trained network connections. Many weak synapses, and even some of the strongest ones, are multifunctional in that they have approximately equal effects in all learned tasks, as has been observed biologically. Training sets all synapses to optimal levels, and many units are automatically given task-specific assignments. But despite their optimal settings, many synapses produce relatively weak effects, particularly in networks that combine convergence and divergence within the same layer. Such findings of "lazy" synapses suggest a re-examination of the role of weak synapses in biological systems. Of equal biological importance is the finding that networks containing only trainable synapses are severely limited computationally unless trainable thresholds are included. Network capabilities are also severely limited by relatively small increases in the number of network units. Some of these findings can be addressed directly from the code of the back-propagation algorithm itself. Others, such as the limitations imposed by increasing network size, need to be viewed through the error surfaces generated by the trial-to-trial connection changes that occur during learning. We discuss the biological implications of these findings.
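The signal generation and training procedure described above can be made concrete in a short sketch. The Python below is a minimal illustration under assumed names and parameter values (network size, learning rate, and the particular teacher function are placeholders, not the authors' original settings). It generates a chaotic input series from the logistic map $x_{n+1} = 3.95(1 - x_n)x_n$ and trains a single-hidden-layer network by on-line back-propagation, updating the weights after every trial and including the trainable thresholds (bias terms) that the findings identify as essential.

```python
import numpy as np

def logistic_series(n, x0=0.1, r=3.95):
    """Chaotic series from the recursive logistic equation
    x_{n+1} = r * (1 - x_n) * x_n, with r = 3.95 as in the paper."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * (1.0 - x[i]) * x[i]
    return x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def teacher(x):
    return 1.0 - x  # placeholder teacher function (an assumption)

# Single-hidden-layer network with one input and one output unit;
# all values and unit sizes here are illustrative assumptions.
rng = np.random.default_rng(0)
n_hidden = 5
W1 = rng.uniform(-0.5, 0.5, (n_hidden, 1))  # input -> hidden weights (divergence)
b1 = rng.uniform(-0.5, 0.5, n_hidden)       # trainable thresholds of hidden units
W2 = rng.uniform(-0.5, 0.5, (1, n_hidden))  # hidden -> output weights (convergence)
b2 = rng.uniform(-0.5, 0.5, 1)              # trainable threshold of output unit
eta = 0.5                                   # learning rate (assumed value)

xs = logistic_series(5000)
for x in xs:                                # "on-line": update after every trial
    h = sigmoid(W1[:, 0] * x + b1)          # hidden activations (analog units)
    y = sigmoid(W2 @ h + b2)                # output activation
    t = teacher(x)
    # Back-propagate the error from this single trial.
    delta_out = (y - t) * y * (1.0 - y)
    delta_hid = (W2[0] * delta_out) * h * (1.0 - h)
    W2 -= eta * np.outer(delta_out, h)
    b2 -= eta * delta_out
    W1[:, 0] -= eta * delta_hid * x
    b1 -= eta * delta_hid
```

In the paper a variety of teacher functions controlled the computations the networks were required to perform; swapping in random-noise or sine input series in place of `logistic_series` reproduces the other input conditions described above.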
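The "neurophysiological" examination of the trained network can be sketched in the same way: each synapse is zeroed in turn and the resulting change in system error is measured, which is how relatively weak, "lazy" synapses would show up. The helper below continues the sketch above; lesioning only the hidden-to-output weights is illustrative, and a full analysis would visit every connection in both layers.

```python
def system_error(W1, b1, W2, b2, xs, teacher):
    """Summed squared error of the trained network over the input series."""
    err = 0.0
    for x in xs:
        h = sigmoid(W1[:, 0] * x + b1)
        y = sigmoid(W2 @ h + b2)
        err += float((y - teacher(x)) ** 2)
    return err

baseline = system_error(W1, b1, W2, b2, xs, teacher)
for i in range(n_hidden):
    saved = W2[0, i]
    W2[0, i] = 0.0                  # "lesion" one hidden -> output synapse
    lesioned = system_error(W1, b1, W2, b2, xs, teacher)
    print(f"synapse {i}: error change {lesioned - baseline:+.4f}")
    W2[0, i] = saved                # restore before testing the next synapse
```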
