Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

Suppose we are given a vector $f$ in a class $\mathcal{F} \subseteq \mathbb{R}^N$, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\varepsilon$ in the Euclidean ($\ell_2$) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the $n$th largest entry of the vector $|f|$ (or of its coefficients in a fixed basis) obeys $|f|_{(n)} \le R \cdot n^{-1/p}$, where $R > 0$ and $p > 0$. Suppose that we take measurements $y_k = \langle f, X_k \rangle$, $k = 1, \ldots, K$, where the $X_k$ are $N$-dimensional Gaussian vectors with independent standard normal entries. Then for each $f$ obeying the decay estimate above for some $0 < p < 1$ and with overwhelming probability, our reconstruction $f^\sharp$, defined as the solution to the constraints $y_k = \langle f^\sharp, X_k \rangle$ with minimal $\ell_1$ norm, obeys $\|f - f^\sharp\|_{\ell_2} \le C_p \cdot R \cdot (K/\log N)^{-r}$, $r = 1/p - 1/2$. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of $K$ measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of $f$. In fact, the results are quite general and require only two hypotheses on the measurement ensemble, which are detailed.
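The sketch below is not the authors' code; it is a minimal illustration, under stated assumptions, of the recovery procedure the abstract describes: generate a compressible signal obeying the power-law decay, take $K$ Gaussian measurements, and recover $f^\sharp$ as the minimal-$\ell_1$-norm vector consistent with the measurements, cast as a linear program. The parameter names ($N$, $K$, $p$, $R$) mirror the abstract, while the solver choice (`scipy.optimize.linprog`) and the specific problem sizes are assumptions made for illustration only.

```python
# Minimal sketch (assumed setup, not the authors' implementation) of
# l1-minimization recovery from random Gaussian projections.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N, K = 256, 100                       # signal length and number of measurements (illustrative)
p, R = 0.5, 1.0                       # decay parameters: |f|_(n) <= R * n**(-1/p)

# Compressible test signal: sorted magnitudes follow the power-law decay.
magnitudes = R * np.arange(1, N + 1) ** (-1.0 / p)
f = rng.permutation(magnitudes) * rng.choice([-1.0, 1.0], size=N)

# Gaussian measurement ensemble: rows X_k with i.i.d. standard normal entries.
X = rng.standard_normal((K, N))
y = X @ f                             # y_k = <f, X_k>

# l1 minimization as an LP over variables (g, t):
#   minimize sum(t)  subject to  -t <= g <= t  and  X g = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([X, np.zeros((K, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N, method="highs")
assert res.success
f_sharp = res.x[:N]

# Compare the l2 error with the scale of the theoretical bound
# (K / log N)^{-r}, r = 1/p - 1/2, up to the constant C_p.
err = np.linalg.norm(f - f_sharp)
scale = R * (K / np.log(N)) ** (-(1.0 / p - 0.5))
print(f"l2 error = {err:.4f}, bound scale (without C_p) = {scale:.4f}")
```

The LP reformulation simply splits the $\ell_1$ objective into auxiliary variables $t_i \ge |g_i|$; any LP or convex solver could be substituted for the one assumed here.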
