Worst-case quadratic loss bounds for prediction using linear functions and gradient descent

We study the performance of gradient descent (GD) when applied to the problem of online linear prediction in arbitrary inner product spaces. We prove worst-case bounds on the sum of the squared prediction errors under various assumptions about the amount of a priori information available on the sequence to be predicted. The algorithms we use are variants and extensions of online GD. Although our algorithms always predict using linear functions as hypotheses, none of our results requires the data to be linearly related. In fact, the bounds we prove on the total prediction loss are typically expressed as a function of the total loss of the best fixed linear predictor with bounded norm. All the upper bounds are tight to within constants, and matching lower bounds are provided in some cases. Finally, we apply our results to the problem of online prediction for classes of smooth functions.
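The basic algorithm underlying the paper can be sketched as the classic online GD (Widrow-Hoff) update for squared loss: predict with the current linear hypothesis, then move the weight vector against the gradient of the squared error. The following is a minimal illustrative sketch, not the paper's tuned variants; the learning rate `eta` is a hypothetical fixed value, whereas the paper's algorithms choose it using a priori information about the sequence.

```python
import numpy as np

def online_gd(examples, eta=0.1):
    """Online gradient descent for linear prediction with squared loss.

    `examples` is a sequence of (x, y) pairs; `eta` is a hypothetical
    fixed learning rate (the paper's variants tune it from a priori
    information about the sequence to predict).
    Returns the final weight vector and the total squared prediction loss.
    """
    w = None
    total_loss = 0.0
    for x, y in examples:
        x = np.asarray(x, dtype=float)
        if w is None:
            w = np.zeros_like(x)           # start from the zero hypothesis
        y_hat = float(w @ x)               # predict with the current linear function
        total_loss += (y_hat - y) ** 2     # accumulate squared prediction error
        w = w - eta * (y_hat - y) * x      # gradient step on the squared loss
    return w, total_loss
```

Note that the data need not be linearly related for the algorithm to be meaningful: the paper's bounds compare `total_loss` to the loss of the best fixed linear predictor with bounded norm.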
