Learning algorithms for oscillatory networks with gap junctions and membrane currents

One of the most important problems in studying neural network models is the adjustment of parameters. Here we show how to formulate this problem as the minimization of the difference between two limit cycles. The backpropagation method for learning algorithms is described as the application of gradient descent to an error function that computes this difference. A mathematical formulation is given that is applicable to any type of network model, and is applied to several models. For example, when learning in a network in which all cells have a common, adjustable bias current, the value of the bias is adjusted at a rate proportional to the difference between the sum of the target outputs and the sum of the actual outputs. When learning in a network of n cells where a target output is given for every cell, the learning algorithm splits into n independent learning algorithms, one per cell. For networks containing gap junctions, a gap junction is modelled as a conductance times the potential difference between the two adjacent cells. The requirement that a conductance g must be positive is enforced by replacing g with a function pos(g*) whose value is always positive, for example exp(0.1 g*), and deriving an algorithm that adjusts the parameter g* in place of g. When a target output is specified for every cell in a network with gap junctions, the learning algorithm splits into fewer independent components, one for each gap-connected subset of the network. The learning algorithm for a gap-connected set of cells cannot be parallelized further. As a final example, a learning algorithm is derived for a mutually inhibitory two-cell network in which each cell has a membrane current. This generalized approach to backpropagation allows one to derive a learning algorithm for almost any model neural network given in terms of differential equations. It will be an essential tool for adjusting parameters in small but complex network models.
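The positivity-enforcing reparameterization described above can be sketched in a few lines. This is a minimal illustration, not code from the paper: a simple quadratic error stands in for the actual limit-cycle mismatch, and the names `pos`, `dpos`, `g_target`, and the learning rate are all hypothetical. What it shows is only the core trick: gradient descent is applied to the unconstrained parameter g* via the chain rule, so that the conductance g = pos(g*) = exp(0.1 g*) can never go negative.

```python
import math

K = 0.1  # exponent scale in pos(g*) = exp(0.1 g*), as in the abstract

def pos(g_star):
    """Positivity-enforcing map: returns a conductance that is always > 0."""
    return math.exp(K * g_star)

def dpos(g_star):
    """Derivative d pos / d g*, needed for the chain rule."""
    return K * math.exp(K * g_star)

# Illustrative stand-in for the limit-cycle error: E(g) = (g - g_target)^2,
# where g_target is a hypothetical conductance that matches the target cycle.
g_target = 2.0

def dE_dg(g):
    return 2.0 * (g - g_target)

# Gradient descent on the unconstrained parameter g*; g = pos(g*) stays positive
# throughout, no matter what value g* takes.
g_star, lr = 0.0, 0.1
for _ in range(2000):
    g = pos(g_star)
    g_star -= lr * dE_dg(g) * dpos(g_star)  # chain rule: dE/dg* = dE/dg * pos'(g*)

print(pos(g_star))  # approaches g_target while remaining positive
```

The same chain-rule substitution carries through any of the paper's derivations: wherever an update for g appears, multiplying by pos'(g*) yields the corresponding update for g*.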
