Learning representations through stochastic gradient descent in cross-validation error

Representations are fundamental to Artificial Intelligence. The performance of a learning system typically depends on its data representation, and these representations are usually hand-engineered using prior domain knowledge about the task. More recently, the trend has been to learn representations with deep neural networks, which can yield dramatic performance improvements over hand-engineered representations. In this paper, we present a new incremental learning algorithm, called crossprop, for learning representations based on prior learning experiences. Unlike backpropagation, crossprop minimizes the cross-validation error. Specifically, our algorithm considers the influence of all the past weights on the current squared error and uses this gradient to incrementally learn the weights of a neural network. The idea is analogous to tuning a learning system through an offline cross-validation procedure. Crossprop is applicable to incremental learning tasks, where the learning system encounters a sequence of examples that must be processed one by one and then discarded; each example can be used only once, and only a limited amount of computation can be spent on it. From our preliminary experiments, we conclude that crossprop is a promising alternative to backprop for representation learning.
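
To make the idea concrete, the following is a minimal sketch, in Python/NumPy, of an incremental single-hidden-layer learner that keeps, for each input weight, a trace of its accumulated influence on the corresponding output weight and folds that trace into the input-weight update. The variable names, the tanh/squared-error setup, the shared step size, and the exact form of the trace and weight updates are our own illustrative assumptions, not the equations of this paper.

```python
import numpy as np

# Illustrative sketch only: a single-hidden-layer regressor trained one example
# at a time.  The exact update forms, step sizes, and names below are our own
# assumptions for illustration, not the published crossprop equations.

rng = np.random.default_rng(0)

n_in, n_hidden = 4, 8
alpha = 0.01                                        # step size (assumed shared by both layers)
U = rng.normal(scale=0.1, size=(n_in, n_hidden))    # input -> hidden weights
w = np.zeros(n_hidden)                              # hidden -> output weights
H = np.zeros((n_in, n_hidden))                      # trace of each U[i, j]'s influence on w[j]


def step(x, y_target):
    """Process a single example once, update all weights, then discard it."""
    global U, w, H
    phi = np.tanh(x @ U)            # hidden features
    y_hat = phi @ w                 # scalar prediction
    delta = y_target - y_hat        # prediction error
    dphi = 1.0 - phi ** 2           # tanh derivative

    # Input weights: the error gradient has the usual backprop term (through the
    # current features) plus a term through H, the accumulated influence of past
    # input-weight values on the output weights.
    grad_U = -delta * (np.outer(x, dphi) * w + H * phi)
    U -= alpha * grad_U

    # Trace update: decay each entry by how strongly its feature just moved the
    # corresponding output weight, then add the new contribution.
    H = H * (1.0 - alpha * phi ** 2) + alpha * delta * np.outer(x, dphi)

    # Output weights: ordinary delta-rule (LMS) update.
    w += alpha * delta * phi
    return delta


# Usage: a single pass over a synthetic stream, each example seen exactly once.
for _ in range(1000):
    x = rng.normal(size=n_in)
    step(x, y_target=np.sin(x.sum()))
```

In this sketch the trace H stands in for the "influence of all the past weights" described above; setting H to zero everywhere reduces the input-weight update to plain incremental backpropagation on the squared error.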