1.34 Neural Computation Theories of Learning

The anatomical discoveries of the nineteenth century and the physiological studies of the twentieth century showed that brains are networks of neurons connected through synapses. This led to the theory that learning could be the consequence of changes in the strengths of those synapses. The best-known theory of learning based on synaptic plasticity was proposed by Donald Hebb, who postulated that connection strengths between neurons are modified based on the activities of the presynaptic and postsynaptic cells: "When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased" (Hebb, 1949).

This postulate was experimentally confirmed in the hippocampus, where high-frequency stimulation of a presynaptic neuron causes long-term potentiation (LTP) in the synapses connecting it to the postsynaptic neuron (Bliss and Lomo, 1973). LTP takes place only if the postsynaptic cell is also active and sufficiently depolarized (Kelso et al., 1986). This requirement is due to the N-methyl-D-aspartate (NMDA) type of glutamate receptor, which opens only when glutamate is bound to the receptor and the postsynaptic cell is sufficiently depolarized at the same time (see Chapters 1.33, 1.35).

Hebb's postulate has served as the starting point for studying the learning capabilities of artificial neural networks (ANNs) and for the theoretical analysis and computational modeling of biological neural systems. The architecture of an ANN determines its behavior and learning capabilities. A network's architecture is defined by the connections among the artificial neural units and by the function that each unit performs on its inputs (see Chapter 1.35). Two general classes of architecture are feedforward and recurrent.
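Hebb's postulate is commonly formalized as a weight change proportional to the product of presynaptic and postsynaptic activity. The following is a minimal numerical sketch of that rule for a single linear unit; the initial weights, input pattern, and learning rate are illustrative assumptions, not values from this chapter:

```python
import numpy as np

# A single linear unit: postsynaptic activity y = w . x
w = np.array([0.01, 0.01, 0.01])   # small initial synaptic weights (illustrative)
eta = 0.1                          # learning rate (illustrative)

def hebbian_step(w, x, eta):
    """Plain Hebb rule: dw_i = eta * x_i * y, the product of
    presynaptic activity x_i and postsynaptic activity y."""
    y = w @ x
    return w + eta * x * y

# Repeatedly pairing one presynaptic pattern with the postsynaptic
# response it evokes strengthens exactly those synapses that carry
# the pattern, an LTP-like effect; silent inputs are unchanged.
x = np.array([1.0, 0.5, 0.0])
for _ in range(20):
    w = hebbian_step(w, x, eta)
```

Note that the plain rule lets weights grow without bound; stabilizing it requires constraints such as weight normalization or homeostatic scaling (see Miller, 1994, and Nelson, 2000, in the reference list).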
The simplest feedforward network has one layer of input units and one layer of output units (Figure 1, left). All connections are unidirectional and project from the input units to the output units. The perceptron is an example of a simple feedforward network (Rosenblatt, 1958). It can learn to classify patterns from examples. It turned out, however, that the perceptron can classify only patterns that are linearly separable, that is, patterns for which all positive examples can be separated from all negative examples by a hyperplane in the space of input patterns. More powerful multilayer feedforward networks can discriminate patterns that are not linearly separable. In a multilayer feedforward network, the …
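A minimal sketch of the perceptron learning rule on a linearly separable problem follows; the function names, learning rate, and epoch count are illustrative assumptions. Logical AND is linearly separable, so the rule converges on it; XOR is the classic example a single-layer perceptron cannot represent:

```python
import numpy as np

def perceptron_train(X, targets, epochs=20, eta=0.5):
    """Rosenblatt's perceptron rule: w += eta * (target - output) * x.
    Guaranteed to converge only when the classes are linearly separable."""
    X = np.hstack([X, np.ones((len(X), 1))])   # append a constant bias input
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, targets):
            y = 1 if w @ x > 0 else 0          # threshold output unit
            w += eta * (target - y) * x        # update only on mistakes
    return w

def perceptron_predict(w, X):
    X = np.hstack([X, np.ones((len(X), 1))])
    return (X @ w > 0).astype(int)

# Train on logical AND over binary input patterns.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t_and = np.array([0, 0, 0, 1])
w = perceptron_train(X, t_and)
```

The learned weight vector defines the separating hyperplane w . x + bias = 0; for AND, one such plane separates (1,1) from the other three corners of the unit square.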

[1] T. J. Sullivan, et al. Homeostatic synaptic scaling in self-organizing maps, 2006, Neural Networks.

[2] Gerald Tesauro, et al. Temporal Difference Learning and TD-Gammon, 1995, J. Int. Comput. Games Assoc.

[3] D. Debanne, et al. Long-term plasticity of intrinsic excitability: learning rules and mechanisms, 2003, Learning & Memory.

[4] Terrence J. Sejnowski, et al. A Parallel Network that Learns to Play Backgammon, 1989, Artif. Intell.

[5] Te-Won Lee, et al. Independent Component Analysis, 1998, Springer US.

[6] Terrence J. Sejnowski, et al. An Information-Maximization Approach to Blind Separation and Blind Deconvolution, 1995, Neural Computation.

[7] James L. McClelland, et al. A homeostatic rule for inhibitory synapses promotes temporal sharpening and cortical reorganization, 2006, Proceedings of the National Academy of Sciences.

[8] Teuvo Kohonen, et al. Self-organized formation of topologically correct feature maps, 2004, Biological Cybernetics.

[9] Geoffrey E. Hinton, et al. Reducing the Dimensionality of Data with Neural Networks, 2006, Science.

[10] T. Sejnowski, et al. A Computational Model of Avian Song Learning, 2000.

[11] Ila R. Fiete, et al. Temporal sparseness of the premotor drive is important for rapid learning in a neural network model of birdsong, 2004, Journal of Neurophysiology.

[12] Kenneth D. Miller, et al. The Role of Constraints in Hebbian Learning, 1994, Neural Computation.

[13] F. Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain, 1958, Psychological Review.

[14] Gerald Tesauro, et al. Temporal difference learning and TD-Gammon, 1995, CACM.

[15] L. F. Abbott, et al. A Model of Spatial Map Formation in the Hippocampus of the Rat, 1999, Neural Computation.

[16] John S. Edwards, et al. The Hedonistic Neuron: A Theory of Memory, Learning and Intelligence, 1983.

[17] H. Markram, et al. Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs, 1997, Science.

[18] Terrence J. Sejnowski, et al. The Hebb Rule for Synaptic Plasticity: Algorithms and Implementations, 1989.

[19] Niraj S. Desai, et al. Plasticity in the intrinsic excitability of cortical pyramidal neurons, 1999, Nature Neuroscience.

[20] T. Sejnowski, et al. The Book of Hebb, 1999, Neuron.

[21] S. Kelso, et al. Hebbian synapses in hippocampus, 1986, Proceedings of the National Academy of Sciences of the United States of America.

[22] S. Nelson, et al. Hebb and homeostasis in neuronal plasticity, 2000, Current Opinion in Neurobiology.

[23] S. G. Lisberger, et al. Motor learning in a recurrent network model based on the vestibulo-ocular reflex, 1992, Nature.

[24] Rajesh P. N. Rao, et al. Self-organizing neural systems based on predictive learning, 2003, Philosophical Transactions of the Royal Society of London, Series A: Mathematical, Physical and Engineering Sciences.

[25] A. Gittis, et al. Intrinsic and synaptic plasticity in the vestibular system, 2006, Current Opinion in Neurobiology.

[26] R. Kempter, et al. Formation of temporal-feature maps by axonal propagation of synaptic learning, 2001, Proceedings of the National Academy of Sciences of the United States of America.

[27] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Trans. Neural Networks.

[28] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities, 1982, Proceedings of the National Academy of Sciences of the United States of America.

[29] H. Seung. Learning in Spiking Neural Networks by Reinforcement of Stochastic Synaptic Transmission, 2003, Neuron.

[30] E. Bienenstock, et al. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex, 1982, The Journal of Neuroscience.

[31] Jonathan R. Whitlock, et al. Learning Induces Long-Term Potentiation in the Hippocampus, 2006, Science.

[32] Daniel Johnston, et al. LTP is accompanied by an enhanced local excitability of pyramidal neuron dendrites, 2004, Nature Neuroscience.

[33] J. Knott. The Organization of Behavior: A Neuropsychological Theory, 1951.

[34] L. Abbott, et al. Cascade Models of Synaptically Stored Memories, 2005, Neuron.

[35] Christof Koch, et al. How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate, 1999, Nature Neuroscience.

[36] D. Ferster, et al. Neural mechanisms of orientation selectivity in the visual cortex, 2000, Annual Review of Neuroscience.

[37] D. Linden, et al. Rapid, synaptically driven increases in the intrinsic excitability of cerebellar deep nuclear neurons, 2000, Nature Neuroscience.

[38] T. Bliss, et al. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path, 1973, The Journal of Physiology.

[39] G. Bi, et al. Synaptic Modifications in Cultured Hippocampal Neurons: Dependence on Spike Timing, Synaptic Strength, and Postsynaptic Cell Type, 1998, The Journal of Neuroscience.

[40] L. F. Abbott, et al. Supervised Learning Through Neuronal Response Modulation, 2005, Neural Computation.

[41] N. Swindale. The development of topography in the visual cortex: a review of models, 1996, Network.

[42] Tzyy-Ping Jung, et al. Imaging brain dynamics using independent component analysis, 2001, Proc. IEEE.