Asymptotically Exact Error Analysis for the Generalized $\ell_2^2$-LASSO

Given an unknown signal $\mathbf{x}_0\in\mathbb{R}^n$ and noisy linear measurements $\mathbf{y}=\mathbf{A}\mathbf{x}_0+\sigma\mathbf{v}\in\mathbb{R}^m$, the generalized $\ell_2^2$-LASSO solves $\hat{\mathbf{x}}:=\arg\min_{\mathbf{x}}\frac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2^2 + \sigma\lambda f(\mathbf{x})$. Here, $f$ is a convex regularization function (e.g., the $\ell_1$-norm or the nuclear norm) that promotes the structure of $\mathbf{x}_0$ (e.g., sparsity or low rank), and $\lambda\geq 0$ is the regularization parameter. A related, though less widely known, optimization problem is the generalized $\ell_2$-LASSO, $\hat{\mathbf{x}}:=\arg\min_{\mathbf{x}}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2 + \lambda f(\mathbf{x})$, which was analyzed in [1]. That work further made conjectures about the performance of the generalized $\ell_2^2$-LASSO; this paper establishes these conjectures rigorously. We measure performance by the normalized squared error $\mathrm{NSE}(\sigma):=\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2^2/\sigma^2$. Assuming the entries of $\mathbf{A}$ and $\mathbf{v}$ are i.i.d. standard normal, we precisely characterize the "asymptotic NSE" $\mathrm{aNSE}:=\lim_{\sigma\rightarrow 0}\mathrm{NSE}(\sigma)$ when the problem dimensions $m,n$ tend to infinity proportionally. The roles of $\lambda$, $f$, and $\mathbf{x}_0$ are captured explicitly in the derived expression through a single geometric quantity, the Gaussian distance to the subdifferential. We conjecture that $\mathrm{aNSE} = \sup_{\sigma>0}\mathrm{NSE}(\sigma)$. We include a detailed discussion of the interpretation of our result, make connections to the relevant literature, and present computational experiments that validate our theoretical findings.
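For intuition, the following Python sketch (not part of the paper; the problem sizes, sparsity level, and choice of $\lambda$ are arbitrary illustrative values) solves the generalized $\ell_2^2$-LASSO with $f=\|\cdot\|_1$ by proximal gradient descent (ISTA) on a synthetic instance with i.i.d. standard normal $\mathbf{A}$ and $\mathbf{v}$, and reports the resulting empirical $\mathrm{NSE}(\sigma)$.

```python
import numpy as np

# Illustrative sketch only: generalized l2^2-LASSO with f = l1-norm,
#   min_x 0.5*||y - A x||_2^2 + sigma*lambda*||x||_1,
# solved by proximal gradient descent (ISTA). The parameters below
# (m, n, k, sigma, lam) are arbitrary choices for illustration.

rng = np.random.default_rng(0)
m, n, k, sigma, lam = 500, 1000, 50, 0.05, 3.0

# k-sparse signal x0 and i.i.d. standard normal A, v (as in the abstract)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = A @ x0 + sigma * rng.standard_normal(m)

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (entrywise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Step size 1/L with L = ||A||_2^2, the Lipschitz constant of the gradient
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(3000):
    grad = A.T @ (A @ x - y)              # gradient of 0.5*||y - A x||_2^2
    x = soft_threshold(x - grad / L, sigma * lam / L)

print("empirical NSE(sigma):", np.sum((x - x0) ** 2) / sigma ** 2)
```

Repeating the experiment for decreasing values of $\sigma$ gives an empirical approximation of $\mathrm{aNSE}$, the quantity characterized in the paper.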

[1] C. Thrampoulidis et al., "Precise error analysis of the LASSO," 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015.

[2] C. Thrampoulidis et al., "A tight version of the Gaussian min-max theorem in the presence of convexity," arXiv, 2014.

[3] E. Candès and T. Tao, "The Dantzig selector: Statistical estimation when p is much larger than n," 2005, arXiv:math/0506081.

[4] C. Thrampoulidis et al., "The squared-error of generalized LASSO: A precise analysis," 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2013.

[5] C. Thrampoulidis et al., "Simple error bounds for regularized noisy linear inverse problems," 2014 IEEE International Symposium on Information Theory (ISIT), 2014.

[6] R. Vershynin, "Introduction to the non-asymptotic analysis of random matrices," in Compressed Sensing, 2010.

[7] R. Tibshirani, "Regression shrinkage and selection via the Lasso," 1996.

[8] P. Bickel et al., "Simultaneous analysis of Lasso and Dantzig selector," 2008, arXiv:0801.1095.

[9] M. Stojnic, "Various thresholds for $\ell_1$-optimization in compressed sensing," 2009.

[10] B. Hassibi et al., "Asymptotically exact denoising in relation to compressed sensing," arXiv, 2013.

[11] M. Stojnic, "A framework to characterize performance of LASSO algorithms," arXiv, 2013.

[12] J. A. Tropp et al., "Living on the edge: Phase transitions in convex programs with random data," 2013, arXiv:1303.6672.

[13] R. G. Baraniuk et al., "From denoising to compressed sensing," IEEE Transactions on Information Theory, 2014.

[14] A. Belloni et al., "Square-root Lasso: Pivotal recovery of sparse signals via conic programming," 2011.

[15] M. J. Wainwright et al., "A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers," NIPS, 2009.

[16] D. Donoho and I. Johnstone, "Minimax risk over $\ell_p$-balls for $\ell_q$-error," 1994.

[17] P. A. Parrilo et al., "The convex geometry of linear inverse problems," Foundations of Computational Mathematics, 2010.

[18] R. Gill et al., "Cox's regression model for counting processes: A large sample study," preprint, 1982.

[19] Y. Gordon, "On Milman's inequality and random subspaces which escape through a mesh in $\mathbb{R}^n$," 1988.