Neural Computation Theories of Learning

Hebb's postulate was experimentally confirmed in the hippocampus, where high-frequency stimulation (HFS) of a presynaptic neuron causes long-term potentiation (LTP) at the synapses connecting it to postsynaptic neurons (Bliss and Lomo, 1973). LTP takes place only if the postsynaptic cell is also active and sufficiently depolarized (Kelso et al., 1986). In many brain areas, this requirement arises from the N-methyl-D-aspartate (NMDA) type of glutamate receptor, which opens only when glutamate is bound to the receptor and the postsynaptic cell is sufficiently depolarized at the same time.

Hebb's rule has served as the starting point for studying the learning capabilities of artificial neural networks (ANNs) and for the theoretical analysis and computational modeling of biological neural systems (Hertz et al., 1991). The architecture of an ANN determines its behavior and learning capabilities; it is defined by the connections among the artificial neural units and the function that each unit performs on its inputs. Two general classes of network models are those with feedforward and those with recurrent architectures. The simplest feedforward network has one layer of input units and one layer of output units (Fig. 1, left). All connections are unidirectional and project from the input units to the output units. The perceptron is an example of such a simple feedforward network (Rosenblatt, 1958); it can learn to classify patterns from examples. However, the perceptron can classify only patterns that are linearly separable, that is, patterns whose positive examples can be separated from all negative examples by a hyperplane in the space of input patterns. More powerful multilayer feedforward networks can discriminate patterns that are not linearly separable. In a multilayer feedforward network, the "hidden" layers of units between the input and output layers allow more flexibility in learning features (Rumelhart et al., 1986). Multilayer feedforward networks can solve some difficult problems (Rumelhart and McClelland, 1986) and underlie the current rapid development of the field of deep learning in machine learning (LeCun et al., 2015).

In contrast to strictly feedforward network models, recurrent networks also have feedback connections among units in the network (Fig. 1, right). A simple recurrent network can have a uniform architecture, such as all-to-all connectivity combined with symmetric weights between units as in the Hopfield network (Hopfield, 1982), or it can have specific connections designed to model a particular biological system (Sporns, 2010).
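As a minimal illustration of these ideas (a toy sketch, not taken from the article), the following Python snippet contrasts a purely correlational Hebbian weight update with the error-driven perceptron rule on a small linearly separable problem (logical OR); the learning rate, epoch count, and data are arbitrary choices for the example.

```python
import numpy as np

# Toy sketch: perceptron learning vs. a plain Hebbian update on logical OR.
# The problem is linearly separable, so the perceptron rule is guaranteed to converge.

# Input patterns with a constant bias input appended, and the target outputs for OR.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)
eta = 0.1                                   # learning rate (arbitrary)

# Perceptron rule: change weights only when the thresholded output is wrong.
w = np.zeros(3)
for epoch in range(20):                     # more than enough epochs for OR
    for x_i, t_i in zip(X, y):
        out = float(w @ x_i > 0)            # binary threshold unit
        w += eta * (t_i - out) * x_i        # error-driven correction
print("perceptron outputs:", (X @ w > 0).astype(int))   # -> [0 1 1 1]

# Plain Hebbian rule: strengthen a weight whenever presynaptic input and
# postsynaptic activity (here, the target) coincide. Note that it is purely
# correlational and grows without a stabilizing mechanism such as
# normalization or homeostasis.
w_hebb = np.zeros(3)
for x_i, t_i in zip(X, y):
    w_hebb += eta * t_i * x_i
print("Hebbian weights:", w_hebb)
```

With hidden layers trained by error backpropagation, the same error-driven idea extends to patterns that are not linearly separable, which is what the multilayer networks discussed above exploit.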

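For the recurrent case, here is a similarly minimal sketch (again an illustrative assumption, not code from the article) of a Hopfield-style network: all-to-all symmetric weights are set by a Hebbian outer-product rule, and asynchronous threshold updates let the state settle back into a stored pattern after corruption. Unlike a feedforward pass, the network's output is the fixed point of its own dynamics.

```python
import numpy as np

# Toy sketch of a Hopfield-style recurrent network with two stored +/-1 patterns.
rng = np.random.default_rng(1)

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1, -1, -1, -1]])
n = patterns.shape[1]

# Hebbian outer-product storage: symmetric weights, no self-connections.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

# Corrupt the first stored pattern by flipping one unit, then update asynchronously.
state = patterns[0].copy()
state[0] *= -1
for _ in range(5):                          # a few sweeps suffice at this size
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered stored pattern:", bool(np.array_equal(state, patterns[0])))
```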
[1] F. Rosenblatt et al. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 1958.

[2] D. Hubel et al. Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology, 1959.

[3] T. Bliss et al. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. The Journal of Physiology, 1973.

[4] J. J. Hopfield et al. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America, 1982.

[5] E. Bienenstock et al. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience, 1982.

[6] John S. Edwards et al. The Hedonistic Neuron: A Theory of Memory, Learning and Intelligence. 1983.

[7] Geoffrey E. Hinton et al. Learning representations by back-propagating errors. Nature, 1986.

[8] S. Kelso et al. Hebbian synapses in hippocampus. Proceedings of the National Academy of Sciences of the United States of America, 1986.

[9] Terrence J. Sejnowski et al. A Parallel Network that Learns to Play Backgammon. Artificial Intelligence, 1989.

[10] Anders Krogh et al. Introduction to the Theory of Neural Computation. The Advanced Book Program, 1994.

[11] S. G. Lisberger et al. Motor learning in a recurrent network model based on the vestibulo-ocular reflex. Nature, 1992.

[12] Terrence J. Sejnowski et al. An Information-Maximization Approach to Blind Separation and Blind Deconvolution. Neural Computation, 1995.

[13] M. Fanselow et al. Synaptic plasticity in the basolateral amygdala induced by hippocampal formation stimulation in vivo. The Journal of Neuroscience, 1995.

[14] P. A. Salin et al. Corticocortical connections in the visual system: structure and function. Physiological Reviews, 1995.

[15] L. F. Abbott et al. A Model of Spatial Map Formation in the Hippocampus of the Rat. Neural Computation, 1999.

[16] N. Swindale. The development of topography in the visual cortex: a review of models. Network, 1996.

[17] H. Markram et al. Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs. Science, 1997.

[18] G. Bi et al. Synaptic Modifications in Cultured Hippocampal Neurons: Dependence on Spike Timing, Synaptic Strength, and Postsynaptic Cell Type. The Journal of Neuroscience, 1998.

[19] Niraj S. Desai et al. Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature Neuroscience, 1999.

[20] T. Sejnowski et al. The Book of Hebb. Neuron, 1999.

[21] Christof Koch et al. How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nature Neuroscience, 1999.

[22] T. Sejnowski et al. A Computational Model of Avian Song Learning. 2000.

[23] Erkki Oja et al. Independent component analysis: algorithms and applications. Neural Networks, 2000.

[24] S. Nelson et al. Hebb and homeostasis in neuronal plasticity. Current Opinion in Neurobiology, 2000.

[25] D. Ferster et al. Neural mechanisms of orientation selectivity in the visual cortex. Annual Review of Neuroscience, 2000.

[26] D. Linden et al. Rapid, synaptically driven increases in the intrinsic excitability of cerebellar deep nuclear neurons. Nature Neuroscience, 2000.

[27] R. Kempter et al. Formation of temporal-feature maps by axonal propagation of synaptic learning. Proceedings of the National Academy of Sciences of the United States of America, 2001.

[28] Tzyy-Ping Jung et al. Imaging brain dynamics using independent component analysis. Proceedings of the IEEE, 2001.

[29] Y. Ben-Ari et al. Long-term plasticity at GABAergic and glycinergic synapses: mechanisms and functional significance. Trends in Neurosciences, 2002.

[30] D. Debanne et al. Long-term plasticity of intrinsic excitability: learning rules and mechanisms. Learning & Memory, 2003.

[31] Rajesh P. N. Rao et al. Self-organizing neural systems based on predictive learning. Philosophical Transactions of the Royal Society of London, Series A: Mathematical, Physical and Engineering Sciences, 2003.

[32] H. Seung et al. Learning in Spiking Neural Networks by Reinforcement of Stochastic Synaptic Transmission. Neuron, 2003.

[33] Rajesh P. N. Rao et al. Motion detection and prediction through spike-timing dependent plasticity. Network, 2004.

[34] Teuvo Kohonen et al. Self-organized formation of topologically correct feature maps. Biological Cybernetics, 2004.

[35] Ila R. Fiete et al. Temporal sparseness of the premotor drive is important for rapid learning in a neural network model of birdsong. Journal of Neurophysiology, 2004.

[36] C. Gilbert et al. Perceptual learning and top-down influences in primary visual cortex. Nature Neuroscience, 2004.

[37] Daniel Johnston et al. LTP is accompanied by an enhanced local excitability of pyramidal neuron dendrites. Nature Neuroscience, 2004.

[38] James L. McClelland. Parallel Distributed Processing. 2005.

[39] L. Abbott et al. Cascade Models of Synaptically Stored Memories. Neuron, 2005.

[40] L. F. Abbott et al. Supervised Learning Through Neuronal Response Modulation. Neural Computation, 2005.

[41] T. J. Sullivan et al. Homeostatic synaptic scaling in self-organizing maps. Neural Networks, 2006.

[42] James L. McClelland et al. A homeostatic rule for inhibitory synapses promotes temporal sharpening and cortical reorganization. Proceedings of the National Academy of Sciences, 2006.

[43] Geoffrey E. Hinton et al. Reducing the Dimensionality of Data with Neural Networks. Science, 2006.

[44] A. Gittis et al. Intrinsic and synaptic plasticity in the vestibular system. Current Opinion in Neurobiology, 2006.

[45] Jonathan R. Whitlock et al. Learning Induces Long-Term Potentiation in the Hippocampus. Science, 2006.

[46] A. Sillito et al. Always returning: feedback and sensory processing in visual cortex and thalamus. Trends in Neurosciences, 2006.

[47] D. Feldman. Synaptic mechanisms for plasticity in neocortex. Annual Review of Neuroscience, 2009.

[48] O. Sporns. Networks of the Brain. 2010.

[49] G. Turrigiano. Too many cooks? Intrinsic and synaptic homeostatic mechanisms in cortical circuit refinement. Annual Review of Neuroscience, 2011.

[50] C. Gilbert et al. Top-Down Modulation of Lateral Interactions in Visual Cortex. The Journal of Neuroscience, 2013.

[51] Terrence J. Sejnowski et al. Top-Down Inputs Enhance Orientation Selectivity in Neurons of the Primary Visual Cortex during Perceptual Learning. PLoS Computational Biology, 2014.

[52] Guigang Zhang et al. Deep Learning. International Journal of Semantic Computing, 2016.

[53] Demis Hassabis et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016.