Weight modifications in traditional neural nets are computed by hard-wired algorithms. Without exception, all previous weight change algorithms have many specific limitations. Is it (in principle) possible to overcome the limitations of hard-wired algorithms by allowing neural nets to run and improve their own weight change algorithms? This paper constructively demonstrates that the answer (in principle) is 'yes'. I derive an initial gradient-based sequence learning algorithm for a 'self-referential' recurrent network that can 'speak' about its own weight matrix in terms of activations. It uses some of its input and output units for observing its own errors and for explicitly analyzing and modifying its own weight matrix, including those parts of the weight matrix responsible for analyzing and modifying the weight matrix. The result is the first 'introspective' neural net with explicit potential control over all of its own adaptive parameters. A disadvantage of the algorithm is its high computational complexity per time step, which is independent of the sequence length and equals O(n_conn log n_conn), where n_conn is the number of connections. Another disadvantage is the high number of local minima of the unusually complex error surface. The purpose of this paper, however, is not to come up with the most efficient 'introspective' or 'self-referential' weight change algorithm, but to show that such algorithms are possible at all.
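The core idea — a recurrent net whose own activations address and overwrite entries of the very weight matrix that produces those activations — can be illustrated with a minimal toy sketch. This is not the paper's gradient-based algorithm: the soft-addressing scheme, the `step` function, and the fixed modification rate are all illustrative assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                  # number of units in the toy recurrent net
W = rng.normal(0.0, 0.1, (n, n))       # weight matrix the net can read and write

def step(x, W):
    """One recurrent step of a toy 'introspective' net (illustrative only).

    The activations serve double duty: besides being the net's state, they
    are reinterpreted as a soft address into W plus a proposed weight change,
    so the net modifies the same matrix that computes its activations --
    including the entries that drive the modification itself.
    """
    h = np.tanh(W @ x)                        # ordinary recurrent activation
    row = np.exp(h) / np.exp(h).sum()         # soft row address over units (assumed scheme)
    col = np.roll(row, 1)                     # soft column address (arbitrary toy choice)
    delta = 0.01 * h[-1]                      # last unit proposes the change magnitude
    W_new = W + delta * np.outer(row, col)    # self-modification: write back into W
    return h, W_new

x = rng.normal(size=n)
for t in range(5):                            # run the net; W drifts under its own control
    x, W = step(x, W)
```

In the paper's actual construction the addressing and modification behavior is itself shaped by gradient descent on the task error, whereas here the modification rule is fixed; the sketch only shows the self-referential data flow.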