Automatically inferring task context for continual learning

While neural network research typically focuses on models that learn to perform well on a single task, models that operate in the real world must often learn multiple tasks, or tasks whose requirements change with context. Furthermore, in the real world the learning signal for each of these tasks usually arrives in sequence, rather than simultaneously in a batch as in the standard deep learning setting. We propose a method to infer when the task context has changed while learning from a continual datastream, and to adjust the model's learning accordingly to prevent interference between learned tasks. We also show how to automatically infer the context of a previously learned task for later use (e.g., during model evaluation). These preliminary results show that neural network models can learn autonomously in a continually changing environment, a setting better matched to how data naturally arrives in the real world.
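The abstract does not specify how context changes are detected; as one illustrative possibility (not the paper's actual method), a detector could monitor the model's per-step loss and flag a context change when the recent average loss rises well above a running baseline. The class name, window size, and threshold below are all hypothetical choices for this sketch.

```python
from collections import deque

class ContextChangeDetector:
    """Hypothetical loss-based context-change detector (illustrative sketch only)."""

    def __init__(self, window=20, threshold=3.0):
        self.window = window        # number of recent losses to average
        self.threshold = threshold  # ratio of recent loss to baseline that triggers a change
        self.recent = deque(maxlen=window)
        self.baseline = None        # running mean loss for the current context

    def update(self, loss):
        """Feed one per-step loss; return True if a context change is inferred."""
        self.recent.append(loss)
        if len(self.recent) < self.window:
            return False
        recent_mean = sum(self.recent) / len(self.recent)
        if self.baseline is None:
            # First full window in this context establishes the baseline.
            self.baseline = recent_mean
            return False
        if recent_mean > self.threshold * self.baseline:
            # Loss spiked relative to baseline: infer a new task context
            # and reset statistics for the new context.
            self.baseline = None
            self.recent.clear()
            return True
        # Slowly track the baseline within the current context.
        self.baseline = 0.99 * self.baseline + 0.01 * recent_mean
        return False
```

On an inferred change, a continual-learning system could then switch to a new set of task-specific parameters or raise regularization on existing weights to limit interference.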