Online Variance Minimization

We design algorithms for two online variance minimization problems. Specifically, in every trial $t$ our algorithms receive a covariance matrix ${\mathcal{C}}_t$ and select a parameter vector ${\boldsymbol{w}}_t$ such that the total variance over a sequence of trials, $\sum_t {\boldsymbol{w}}_t^{\top}{\mathcal{C}}_t{\boldsymbol{w}}_t$, is not much larger than the total variance of the best parameter vector ${\boldsymbol{u}}$ chosen in hindsight. Two parameter spaces are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second leads to an online calculation of the eigenvector with minimum eigenvalue. For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated by a relative entropy. In the second case the algorithm maintains a mixture of unit vectors, represented as a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each case we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter.
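To make the two update rules concrete, the following is a minimal NumPy/SciPy sketch of the multiplicative updates described above: an Exponentiated Gradient step on the probability simplex and a Matrix Exponentiated Gradient step on density matrices. The learning rate `eta`, the factor of 2 in the gradient of ${\boldsymbol{w}}^{\top}{\mathcal{C}}{\boldsymbol{w}}$, and the uniform initializations are assumptions made for illustration, not parameters fixed by the text.

```python
import numpy as np
from scipy.linalg import expm, logm

def eg_update(w, C, eta=0.1):
    """Exponentiated Gradient step on the probability simplex.

    The per-trial loss is w^T C w with gradient 2 C w; the
    multiplicative update is re-normalized so that w remains a
    probability vector. (eta is an illustrative choice.)
    """
    grad = 2.0 * C @ w
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

def meg_update(W, C, eta=0.1):
    """Matrix Exponentiated Gradient step on density matrices.

    W is a density matrix (symmetric positive definite, unit trace)
    representing a mixture of unit vectors; its expected variance on
    covariance matrix C is tr(W C). The update exponentiates in the
    matrix-log domain and re-normalizes the trace.
    """
    M = expm(logm(W) - eta * C)
    return M / np.trace(M)

# Usage sketch: run both updates on a stream of random covariance matrices.
rng = np.random.default_rng(0)
n = 4
w = np.full(n, 1.0 / n)   # uniform start on the simplex
W = np.eye(n) / n         # uniform (maximally mixed) density matrix
for _ in range(10):
    A = rng.standard_normal((n, n))
    C = A @ A.T           # random covariance matrix for this trial
    w = eg_update(w, C)
    W = meg_update(W, C)
```

Starting the density matrix at the maximally mixed state ${\mathcal{I}}/n$ keeps it strictly positive definite, so the matrix logarithm in the sketch stays well defined across trials.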