Online gradient descent for least squares regression: Non-asymptotic bounds and application to bandits

We improve the computational complexity of online learning algorithms that must frequently recompute least squares regression estimates of parameters. We propose two online gradient descent schemes, with randomisation of samples, that efficiently track the true solutions of the underlying regression problems. Both algorithms give an O(d) improvement in complexity, where d is the dimension of the data, which makes them attractive in big data settings where d is large. The first algorithm assumes strong convexity of the regression problem, and we bound its error both in expectation and with high probability (the latter is often needed to provide theoretical guarantees for higher-level algorithms). The second algorithm handles cases where strong convexity of the regression problem cannot be guaranteed, using adaptive regularisation. We combine the first algorithm with the PEGE linear bandit algorithm of Rusmevichientong and Tsitsiklis [2010] and show that only logarithmic factors are lost in the regret performance of PEGE. We empirically test the second algorithm in a news article recommendation application, using the large-scale news recommendation dataset from the Yahoo! front page. These experiments show a large computational saving, with no appreciable loss in performance.
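
To make the idea concrete, below is a minimal sketch (not the paper's exact algorithm) of tracking a least squares solution by stochastic gradient descent over uniformly resampled observations, at O(d) cost per step instead of recomputing the full estimate. The function name `sgd_least_squares_tracker`, the strong-convexity constant `mu`, and the step-size schedule 1/(mu*t) are illustrative assumptions corresponding to the strongly convex case.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_least_squares_tracker(X, y, n_steps=None, mu=0.1):
    """Track the least squares solution of (X, y) with SGD over
    uniformly resampled rows; each step costs O(d).

    `mu` is an assumed strong-convexity constant of the objective,
    used in the classical 1/(mu*t) step size for strongly convex SGD."""
    n, d = X.shape
    w = np.zeros(d)
    n_steps = n_steps or 10 * n
    for t in range(1, n_steps + 1):
        i = rng.integers(n)                # randomised sample
        g = (X[i] @ w - y[i]) * X[i]       # gradient of 0.5 * (x_i^T w - y_i)^2
        w -= g / (mu * t)                  # decaying step size
    return w

# Toy check against the exact least squares estimate.
n, d = 500, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)
w_sgd = sgd_least_squares_tracker(X, y)
w_exact, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.linalg.norm(w_sgd - w_exact))
```

Each iteration touches a single row of X, which is the source of the O(d) per-step cost; an exact recomputation of the least squares estimate would cost O(d^2) or more per update.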

[1] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization. ICML, 2011.

[2] Pierre Tarrès and Yuan Yao. Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence. IEEE Transactions on Information Theory, 2011.

[3] Martin Zinkevich. Online Convex Programming and Generalized Infinitesimal Gradient Ascent. ICML, 2003.

[4] Max Fathi and Noufel Frikha. Transport-Entropy inequalities and deviation estimates for stochastic approximation schemes. arXiv:1301.7740, 2013.

[5] Varsha Dani, Thomas P. Hayes, and Sham M. Kakade. Stochastic Linear Optimization under Bandit Feedback. COLT, 2008.

[6] Elad Hazan and Satyen Kale. An optimal algorithm for stochastic strongly-convex optimization. arXiv:1006.2425, 2010.

[7] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. WWW, 2010.

[8] Nicolas Le Roux, Mark W. Schmidt, and Francis Bach. A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets. NIPS, 2012.

[9] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. Journal of Machine Learning Research, 2012.

[10] Paat Rusmevichientong and John N. Tsitsiklis. Linearly Parameterized Bandits. Mathematics of Operations Research, 2010.

[11] Francis Bach and Eric Moulines. Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning. NIPS, 2011.

[12] Noufel Frikha and Stéphane Menozzi. Concentration bounds for stochastic approximations. arXiv:1204.3730, 2012.

[13] Rie Johnson and Tong Zhang. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction. NIPS, 2013.