An attempt is made to determine how a system can learn to reduce the descriptions of event sequences without losing information. It is shown that the learning system ought to concentrate on unexpected inputs and ignore expected ones, since predictable inputs carry no new information. This insight leads to the construction of neural systems that learn to 'divide and conquer' by recursively composing sequences. The first system creates a self-organizing multilevel hierarchy of recurrent predictors. The second system involves only two recurrent networks: it tries to collapse a multilevel predictor hierarchy into a single recurrent net. Experiments show that the system can require less computation per time step and far fewer training sequences than conventional training algorithms for recurrent nets.