Predicting Temporal Patterns in the Environment: Toward Primitive Mechanisms of Learning, Memory, and Generalization

Across a wide range of cognitive tasks, recent experience influences subsequent behavior. For example, when individuals repeatedly perform a speeded two-alternative choice task, response latencies vary dramatically with the immediately preceding sequence of trials. These sequential dependencies (SDs) have been interpreted as adaptation to the statistical structure of an uncertain, changing environment (e.g., Jones & Sieck, 2003; Mozer, Kinoshita, & Shettel, 2007; Yu & Cohen, 2009), and they can shed light on how individuals learn and represent structure in binary stimulus sequences. Heretofore, theories have posited that SDs arise from rapidly (exponentially) decaying memory traces of various environmental statistics (e.g., Cho et al., 2002; Yu & Cohen, 2009).

We present a series of experiments and a model that place SDs on a fundamentally different foundation. We show that: (1) decay of recent experience can follow a power function, not an exponential, linking the SD literature to a rich literature on human declarative memory; (2) the simple trace-based mechanism underlying existing accounts is inadequate, but incremental memory adjustments may be explained via error correction, linking the SD literature to an equally rich literature on human associative learning; and (3) the brain contains distinct but interacting subsystems that jointly predict upcoming environmental events.

We conducted three behavioral studies with EEG recordings of individuals performing discriminations of spatial location and of motion coherence. By identifying the onset of the lateralized readiness potential (LRP) in an event-related EEG analysis, we are able to decompose total response latency into two intervals, pre- and post-LRP onset, and to examine SDs in stimulus and response processing separately. We find evidence for two distinct mechanisms: one reflecting incremental learning of the stimulus repetition rate (i.e., the probability that successive stimuli will match), and the other reflecting incremental learning of response baserates. The data cannot be explained by a model in which these rates are estimated from independent traces; they call for an account in which the two rates jointly predict future stimuli via error-correction learning.

By manipulating the autocorrelation structure of the sequences, from a positive to a negative autocorrelation, we obtained evidence for incremental learning occurring over hundreds of trials, which is parsimoniously explained by a memory with power-function decay. Together, the results highlight a tension between two broad and well-established classes of models: trace-based memory models and learning models based on error correction. Two attempts at reconciling these approaches via modeling are discussed.
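
To make the contrast between the two decay assumptions concrete, the Python sketch below computes a recency-weighted estimate of the repetition rate under exponential versus power-function weighting of past trials. It illustrates the general idea only, not the model reported here; the decay parameters, weighting scheme, and example history are assumptions chosen for clarity.

```python
import numpy as np

def weighted_repetition_estimate(repeats, kind="power", alpha=0.3, d=1.0):
    """Estimate the repetition rate from a binary history of repeat/alternate
    trials (1 = the stimulus matched the previous one), weighting each past
    trial by its recency under exponential or power-law decay.

    Illustrative only: the decay parameters are arbitrary, not fitted values.
    """
    repeats = np.asarray(repeats, dtype=float)     # repeats[0] = oldest trial
    lags = np.arange(len(repeats), 0, -1)          # lag 1 = most recent trial
    if kind == "exponential":
        w = (1.0 - alpha) ** (lags - 1)            # rapid, trace-like decay
    elif kind == "power":
        w = lags ** (-d)                           # slow, long-tailed decay
    else:
        raise ValueError("kind must be 'exponential' or 'power'")
    return float(np.dot(w, repeats) / w.sum())

# Example: repetitions were common long ago but absent in the last few trials.
history = [1] * 20 + [0] * 5
print(weighted_repetition_estimate(history, kind="exponential"))  # dominated by recent trials
print(weighted_repetition_estimate(history, kind="power"))        # still influenced by older trials
```

The exponential estimate is governed almost entirely by the last few trials, whereas the power-function estimate retains a long tail of influence from older trials, which is the property needed to explain learning that unfolds over hundreds of trials.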
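The joint-prediction account can likewise be illustrated with a minimal delta-rule learner in which a repetition cue (tied to the previous stimulus) and a constant baserate cue share a single prediction error. This is a hedged sketch of error-correction learning with two cues, not the fitted model; the cue coding, learning rate, and example sequence are assumptions.

```python
import numpy as np

def delta_rule_prediction(stimuli, lr=0.1):
    """Jointly learn to predict the next binary stimulus from two cues:
    a repetition cue coded relative to the previous stimulus, and a
    constant baserate cue. Weights are updated by error correction.
    """
    stimuli = np.asarray(stimuli, dtype=float)     # 0/1 stimulus identities
    w_rep, w_base = 0.0, 0.0                       # weights for the two cues
    predictions = np.full(len(stimuli), np.nan)
    for t in range(1, len(stimuli)):
        prev = 2 * stimuli[t - 1] - 1              # previous stimulus coded -1/+1
        pred = w_rep * prev + w_base               # joint prediction on a -1/+1 scale
        target = 2 * stimuli[t] - 1
        error = target - pred
        w_rep += lr * error * prev                 # both cues are updated by the
        w_base += lr * error                       # same shared prediction error
        predictions[t] = pred
    return predictions, w_rep, w_base

# Example: a sequence with a repetition bias but no baserate bias;
# w_rep becomes positive while w_base stays near zero.
rng = np.random.default_rng(0)
seq = [1]
for _ in range(500):
    seq.append(seq[-1] if rng.random() < 0.7 else 1 - seq[-1])
_, w_rep, w_base = delta_rule_prediction(seq)
print(round(w_rep, 2), round(w_base, 2))
```

Because the two cues compete to explain a single prediction error, a change in the repetition statistics alters what the baserate cue learns, and vice versa; this interdependence is what distinguishes the error-correction account from independent trace-based estimates.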
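Finally, the autocorrelation manipulation can be mimicked by generating binary sequences from a first-order Markov process whose repetition probability is above or below 0.5. The specific probabilities and sequence length below are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

def markov_binary_sequence(n_trials, p_repeat, seed=0):
    """Generate a binary stimulus sequence from a first-order Markov process.
    p_repeat > 0.5 yields positive autocorrelation (repetitions common);
    p_repeat < 0.5 yields negative autocorrelation (alternations common).
    """
    rng = np.random.default_rng(seed)
    seq = np.empty(n_trials, dtype=int)
    seq[0] = rng.integers(2)
    for t in range(1, n_trials):
        seq[t] = seq[t - 1] if rng.random() < p_repeat else 1 - seq[t - 1]
    return seq

positive = markov_binary_sequence(1000, p_repeat=0.7)   # positively autocorrelated condition
negative = markov_binary_sequence(1000, p_repeat=0.3)   # negatively autocorrelated condition

# Empirical lag-1 autocorrelation confirms the manipulation.
for name, s in [("positive", positive), ("negative", negative)]:
    print(name, round(np.corrcoef(s[:-1], s[1:])[0, 1], 2))
```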