Fixed-weight on-line learning

Conventional artificial neural networks perform functional mappings from their input space to their output space. Their synaptic weights encode information about the mapping in a manner analogous to long-term memory in biological systems. This paper presents a method of designing neural networks in which recurrent signal loops store this knowledge in a manner analogous to short-term memory, while the synaptic weights encode a learning algorithm. Such networks can dynamically learn any functional mapping from a (possibly very large) set without changing any synaptic weights. They are adaptive dynamic systems: learning is on-line, taking place continually as part of the network's overall behavior rather than as a separate, externally driven process. We present four higher-order fixed-weight learning networks. Two of them have standard backpropagation embedded in their synaptic weights; the other two use a more efficient gradient-descent-based learning rule, discovered by examining variations in fixed-weight topology. We present empirical tests showing that all of these networks successfully learned functions from both discrete (Boolean) and continuous function sets. The networks were largely robust to perturbations of the synaptic weights; the exception was the recurrent connections used to store information, which required a tight tolerance of 0.5%. We found that the cost of these networks scales approximately in proportion to the total number of synapses. Finally, we consider evolving fixed-weight networks tailored to a specific problem class by analyzing the meta-learning cost surface of the networks presented.
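As a concrete, deliberately simplified illustration of the idea, the sketch below implements a fixed-structure on-line learner in NumPy. It is not the paper's architecture: the function name `fixed_weight_lms`, the linear target family, and the use of the LMS/delta rule are assumptions made for brevity. What it shares with the networks described above is the key property that the learned mapping lives entirely in the recurrent state, while the quantities playing the role of synaptic weights (the fixed learning rate and the multiplicative, higher-order connections between state and input) never change.

```python
# Minimal sketch (not the paper's networks): a fixed-structure system whose
# recurrent state stores the parameters of a linear map and whose fixed
# dynamics implement the LMS / delta rule, i.e. gradient descent embedded
# in the structure rather than applied to the structure.
import numpy as np

def fixed_weight_lms(stream, n_in, eta=0.1):
    """Run a fixed-structure learner over a stream of (input, target) pairs.

    The only quantity that changes is the recurrent state `w` (short-term
    memory); `eta` and the multiplicative (higher-order) connections are
    fixed, so nothing analogous to a synaptic weight is ever updated.
    """
    w = np.zeros(n_in)                 # learned mapping lives in the state
    preds = []
    for x, y in stream:
        y_hat = w @ x                  # higher-order term: state * input
        preds.append(y_hat)
        w = w + eta * (y - y_hat) * x  # fixed dynamics implement gradient descent
    return np.array(preds)

# Usage: the same fixed system picks up whichever linear target it is shown.
rng = np.random.default_rng(0)
true_w = rng.normal(size=3)
xs = rng.normal(size=(200, 3))
ys = xs @ true_w
preds = fixed_weight_lms(zip(xs, ys), n_in=3)
print(np.abs(preds[-10:] - ys[-10:]).max())   # error shrinks as the state adapts
```

Driving the same fixed system with a stream drawn from a different target makes it converge to that target instead, which is the sense in which learning is part of the system's ordinary dynamics rather than an external weight-update procedure.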
