Precise error analysis of the LASSO

A classical problem that arises in numerous signal processing applications asks for the reconstruction of an unknown, k-sparse signal x<sub>0</sub> ∈ ℝ<sup>n</sup> from underdetermined, noisy, linear measurements y = Ax<sub>0</sub> + z ∈ ℝ<sup>m</sup>. One standard approach is to solve the convex program x̂ = arg min<sub>x</sub> ∥y − Ax∥<sub>2</sub> + λ∥x∥<sub>1</sub>, known as the ℓ<sub>2</sub>-LASSO. We assume that the entries of the sensing matrix A and of the noise vector z are i.i.d. Gaussian with variances 1/m and σ<sup>2</sup>, respectively. In the large-system limit, in which the problem dimensions grow to infinity at constant rates, we precisely characterize the limiting behavior of the normalized squared error ∥x̂ − x<sub>0</sub>∥<sub>2</sub><sup>2</sup>/σ<sup>2</sup>. Our numerical illustrations validate the theoretical predictions.
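
Below is a minimal simulation sketch of this setup in Python, using NumPy and CVXPY. The solver choice, the dimensions, and the regularization parameter λ are illustrative assumptions, not values from the paper. The sketch draws a k-sparse x<sub>0</sub>, forms y = Ax<sub>0</sub> + z with the stated Gaussian statistics, solves the ℓ<sub>2</sub>-LASSO, and reports the normalized squared error.

```python
import numpy as np
import cvxpy as cp

# Illustrative dimensions; the theory concerns m, n, k -> infinity
# while the ratios m/n and k/n stay constant.
n, m, k = 400, 200, 20
sigma = 0.1   # noise standard deviation
lam = 1.0     # regularization parameter (a hypothetical choice)

rng = np.random.default_rng(0)

# k-sparse ground-truth signal x0
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)

# Sensing matrix with i.i.d. N(0, 1/m) entries; Gaussian noise with variance sigma^2
A = rng.standard_normal((m, n)) / np.sqrt(m)
z = sigma * rng.standard_normal(m)
y = A @ x0 + z

# l2-LASSO: minimize ||y - Ax||_2 + lam * ||x||_1  (unsquared residual norm)
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm(x, 1)))
prob.solve()

# Normalized squared error, the quantity whose limit the paper characterizes
nse = np.sum((x.value - x0) ** 2) / sigma**2
print(f"NSE = {nse:.3f}")
```

Note that the objective uses the unsquared ℓ<sub>2</sub> norm of the residual; this is what distinguishes the ℓ<sub>2</sub>-LASSO (closely related to the square-root LASSO) from the more common squared-residual formulation.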
