Minimax Time Series Prediction

We consider an adversarial formulation of the problem of predicting a time series with square loss. The aim is to predict an arbitrary sequence of vectors almost as well as the best smooth comparator sequence in retrospect. Our approach allows natural measures of smoothness such as the squared norm of increments. More generally, we consider a linear time series model and penalize the comparator sequence through the energy of the implied driving noise terms. We derive the minimax strategy for all problems of this type and show that it can be implemented efficiently. The optimal predictions are linear in the previous observations. We obtain an explicit expression for the regret in terms of the parameters defining the problem. For typical, simple definitions of smoothness, the computation of the optimal predictions involves only sparse matrices. In the case of norm-constrained data, where the smoothness is defined in terms of the squared norm of the comparator's increments, we show that the regret grows as T/√λ_T, where T is the length of the game and λ_T is an increasing limit on comparator smoothness.
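To see why the squared norm of increments leads to sparse matrices, note that the penalty Σ_t ‖a_t − a_{t−1}‖² on a comparator sequence a_1, …, a_T can be written as a quadratic form aᵀKa with a symmetric tridiagonal K. The sketch below (an illustration only; the paper's minimax coefficients are derived from this structure but are not reproduced here, and the function name and the scaling parameter `lam` are our own choices) builds K for a scalar comparator and shows its tridiagonal pattern:

```python
import numpy as np

def increment_penalty_matrix(T, lam=1.0):
    """Return K with lam * sum_{t=2}^T (a_t - a_{t-1})^2 == a @ K @ a.

    K = lam * D.T @ D, where D is the (T-1) x T first-difference matrix,
    so K is symmetric tridiagonal (the sparsity the abstract refers to).
    """
    D = np.diff(np.eye(T), axis=0)  # rows are e_{t+1} - e_t
    return lam * D.T @ D

K = increment_penalty_matrix(5)
# Tridiagonal: main diagonal [1, 2, 2, 2, 1], off-diagonals all -1,
# and every entry two or more positions off the diagonal is zero.
```

Because K is tridiagonal, solving the linear systems that define the optimal (linear-in-the-past) predictions costs O(T) rather than O(T³), and reference [2] on analytical inversion of symmetric tridiagonal matrices gives closed forms for K's inverse.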
