Adaptive Decomposition Of Time

In this paper we introduce design principles for unsupervised detection of regularities (like causal relationships) in temporal sequences. One basic idea is to train an adaptive predictor module to predict future events from past events, and to train an additional confidence module to model the reliability of the predictor's predictions. We select system states at those points in time where there are changes in prediction reliability, and use them recursively as inputs for higher-level predictors. This can be beneficial for `adaptive sub-goal generation' as well as for `conventional' goal-directed (supervised and reinforcement) learning: systems based on these design principles were successfully tested on tasks where conventional training algorithms for recurrent nets fail. Finally we describe the principles of the first neural sequence `chunker', which collapses a self-organizing multi-level predictor hierarchy into a single recurrent network.

This paper is based on the `principle of reduced history description': as long as an adaptive sequence-processing dynamic system is able to predict future environmental inputs from previous ones, no additional knowledge can be obtained by observing these inputs in reality. Only unpredicted inputs deserve attention ([7][4][9]). This paper demonstrates that it can be very efficient to focus on unexpected inputs and ignore expected ones. First we motivate this work by describing a major problem of `conventional' learning algorithms for time-varying inputs, namely, the problem of long time lags between relevant inputs. Then we introduce a principle for unsupervised detection of causal chains in streams of input events. Short representations of `presumed causal chains' recursively serve as inputs for `higher-level' detec
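The predictor-plus-confidence idea can be illustrated with a minimal sketch. This is not the paper's method (which uses adaptive neural predictors); here a toy frequency-based predictor stands in, and the names `Predictor`, `reduced_description`, and the confidence threshold are illustrative assumptions. Only inputs the predictor cannot reliably anticipate are kept, yielding the reduced history description that would be passed to a higher-level predictor:

```python
from collections import Counter, defaultdict

class Predictor:
    """Toy frequency-based next-symbol predictor with a confidence
    estimate (illustrative stand-in for the adaptive predictor and
    confidence modules described in the text)."""
    def __init__(self):
        self.counts = defaultdict(Counter)  # context -> successor counts

    def predict(self, context):
        c = self.counts[context]
        if not c:
            return None, 0.0
        symbol, n = c.most_common(1)[0]
        return symbol, n / sum(c.values())  # confidence = empirical reliability

    def update(self, context, actual):
        self.counts[context][actual] += 1

def reduced_description(sequence, threshold=0.5):
    """Return only the (time, symbol) pairs the predictor fails to
    anticipate reliably -- the unexpected inputs that deserve attention."""
    p = Predictor()
    reduced, prev = [], None
    for t, x in enumerate(sequence):
        guess, conf = p.predict(prev)
        if guess != x or conf < threshold:
            reduced.append((t, x))  # surprising: keep for the higher level
        if prev is not None:
            p.update(prev, x)       # let the predictor adapt online
        prev = x
    return reduced

# Once the 'ab' alternation is learned, only the start and the
# unexpected final 'c' survive in the reduced description.
print(reduced_description(list("ababababc")))
# -> [(0, 'a'), (1, 'b'), (2, 'a'), (8, 'c')]
```

The shortened sequence of surprising events could itself be fed to another instance of the same machinery, giving the recursive multi-level hierarchy the text describes.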