A Tight Version of the Gaussian Min-Max Theorem in the Presence of Convexity

Gaussian comparison theorems are useful tools in probability theory; they are essential ingredients in the classical proofs of many results in empirical processes and extreme value theory. More recently, they have been used extensively in the analysis of underdetermined linear inverse problems. A prominent role in the study of those problems is played by Gordon's Gaussian min-max theorem, and it has been observed that applications of this theorem often yield tight results. Motivated by recent work due to M. Stojnic, we argue explicitly that the theorem is tight under additional convexity assumptions. To illustrate the usefulness of the result, we provide an application example from the field of noisy linear inverse problems.
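
To make the statement concrete, the comparison at the heart of the paper can be sketched as follows; the notation is standard in this line of work, but the formulation here is a paraphrase rather than a verbatim statement from the paper. Let $G \in \mathbb{R}^{m \times n}$, $g \in \mathbb{R}^m$ and $h \in \mathbb{R}^n$ have i.i.d. standard normal entries, let $S_w \subset \mathbb{R}^n$ and $S_u \subset \mathbb{R}^m$ be compact sets, and let $\psi$ be continuous. Consider the primary and auxiliary optimizations

$$\Phi(G) = \min_{w \in S_w} \max_{u \in S_u} \; u^\top G w + \psi(w,u), \qquad \phi(g,h) = \min_{w \in S_w} \max_{u \in S_u} \; \|w\|_2 \, g^\top u + \|u\|_2 \, h^\top w + \psi(w,u).$$

Gordon's theorem gives the one-sided comparison $\mathbb{P}\big(\Phi(G) < c\big) \le 2\,\mathbb{P}\big(\phi(g,h) \le c\big)$ for arbitrary compact sets. The tight version adds that when $S_w$ and $S_u$ are convex and $\psi(w,u)$ is convex in $w$ and concave in $u$, the matching bound $\mathbb{P}\big(\Phi(G) > c\big) \le 2\,\mathbb{P}\big(\phi(g,h) \ge c\big)$ also holds, so $\Phi(G)$ concentrates around any value that $\phi(g,h)$ concentrates around.

The tightness is easy to observe numerically. The following minimal sketch picks one convex instance of the above; the sets, the choice $\psi(w,u) = -u^\top z$, and all problem sizes are our own illustrative assumptions, not taken from the paper. With this choice the primary problem is the constrained least-squares $\Phi = \min_{\|w\|_2 \le 1} \|Gw - z\|_2$, and the auxiliary problem collapses, after optimizing the directions of $u$ and $w$ in closed form, to a one-dimensional minimization over $\alpha = \|w\|_2$.

# Numerical sketch of the two-sided (convex) comparison. Everything below is
# an illustrative instance of our own choosing, not taken from the paper:
# S_w and S_u are unit Euclidean balls and psi(w, u) = -u^T z for a fixed z,
# so Phi = min_{||w|| <= 1} ||G w - z||_2 and the auxiliary value collapses
# to phi = min_{0 <= a <= 1} max(||a g - z||_2 - a ||h||_2, 0).
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 200, 100, 50
z = rng.standard_normal(m)  # fixed vector, shared by all trials

def primary(G, z, iters=500):
    # Phi(G) via projected gradient descent on f(w) = 0.5 ||G w - z||^2
    # over the unit ball; f has the same minimizer as ||G w - z||_2.
    w = np.zeros(G.shape[1])
    step = 1.0 / np.linalg.norm(G, 2) ** 2   # 1/L, L = smoothness constant
    for _ in range(iters):
        w -= step * (G.T @ (G @ w - z))
        nrm = np.linalg.norm(w)
        if nrm > 1.0:
            w /= nrm                         # project back onto the ball
    return np.linalg.norm(G @ w - z)

def auxiliary(g, h, z):
    # phi(g, h): only a = ||w|| remains after the directional maximization
    # and minimization; minimize over a fine grid on [0, 1].
    a = np.linspace(0.0, 1.0, 2001)[:, None]
    vals = np.linalg.norm(a * g - z, axis=1) - a[:, 0] * np.linalg.norm(h)
    return np.maximum(vals, 0.0).min()

Phi = [primary(rng.standard_normal((m, n)), z) for _ in range(trials)]
phi = [auxiliary(rng.standard_normal(m), rng.standard_normal(n), z)
       for _ in range(trials)]
print(f"Phi: mean {np.mean(Phi):.3f}, std {np.std(Phi):.3f}")
print(f"phi: mean {np.mean(phi):.3f}, std {np.std(phi):.3f}")

Over independent draws the two optimal values concentrate around the same number, which is exactly what the two-sided comparison predicts; Gordon's inequality alone would only control the lower tail of $\Phi(G)$, and convexity supplies the matching upper bound.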
