Asymptotically Exact Denoising in Relation to Compressed Sensing
[1] J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien , 1962 .
[2] R. Rockafellar. Monotone Operators and the Proximal Point Algorithm , 1976 .
[3] T. Maruyama. On a few developments in Convex Analysis , 1977 .
[4] Y. Gordon. On Milman's inequality and random subspaces which escape through a mesh in ℝⁿ , 1988 .
[5] Osman Güler. On the convergence of the proximal point algorithm for convex minimization , 1991 .
[6] M. Talagrand,et al. Probability in Banach Spaces: Isoperimetry and Processes , 1991 .
[7] L. Rudin,et al. Nonlinear total variation based noise removal algorithms , 1992 .
[8] David L. Donoho,et al. De-noising by soft-thresholding , 1995, IEEE Trans. Inf. Theory.
[9] Scott Chen,et al. Examples of basis pursuit , 1995, Optics + Photonics.
[10] I. Johnstone,et al. Adapting to Unknown Smoothness via Wavelet Shrinkage , 1995 .
[11] R. Tibshirani. Regression Shrinkage and Selection via the Lasso , 1996 .
[12] Michael A. Saunders,et al. Atomic Decomposition by Basis Pursuit , 1998, SIAM J. Sci. Comput..
[13] Stephen P. Boyd,et al. Applications of second-order cone programming , 1998 .
[14] R. Rockafellar. SECOND-ORDER CONVEX ANALYSIS , 1999 .
[15] Jos F. Sturm,et al. A Matlab toolbox for optimization over symmetric cones , 1999 .
[16] D. Donoho,et al. Atomic Decomposition by Basis Pursuit , 2001 .
[17] M. Ledoux. The concentration of measure phenomenon , 2001 .
[18] Xiaoming Huo,et al. Uncertainty principles and ideal atomic decomposition , 2001, IEEE Trans. Inf. Theory.
[19] Robert D. Nowak,et al. An EM algorithm for wavelet-based image restoration , 2003, IEEE Trans. Image Process..
[20] Stephen L. Keeling,et al. Total variation based convex filters for medical imaging , 2003, Appl. Math. Comput..
[21] E. Candès,et al. Astronomical image representation by the curvelet transform , 2003, Astronomy & Astrophysics.
[22] Dimitri P. Bertsekas,et al. Convex Analysis and Optimization , 2003 .
[23] Yurii Nesterov,et al. Introductory Lectures on Convex Optimization - A Basic Course , 2014, Applied Optimization.
[24] Patrick L. Combettes,et al. Signal Recovery by Proximal Forward-Backward Splitting , 2005, Multiscale Model. Simul..
[25] E. Candès,et al. Stable signal recovery from incomplete and inaccurate measurements , 2005, math/0503066.
[26] D. Donoho,et al. Neighborliness of randomly projected simplices in high dimensions. , 2005, Proceedings of the National Academy of Sciences of the United States of America.
[27] D. Donoho,et al. Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA) , 2005 .
[28] Stephen P. Boyd,et al. Convex Optimization , 2004, Algorithms and Theory of Computation Handbook.
[29] Emmanuel J. Candès,et al. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information , 2004, IEEE Transactions on Information Theory.
[30] Emmanuel J. Candès,et al. Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions , 2004, Found. Comput. Math..
[31] David L. Donoho,et al. High-Dimensional Centrally Symmetric Polytopes with Neighborliness Proportional to Dimension , 2006, Discret. Comput. Geom..
[32] D. Donoho,et al. Thresholds for the Recovery of Sparse Solutions via L1 Minimization , 2006, 2006 40th Annual Conference on Information Sciences and Systems.
[33] Terence Tao,et al. The Dantzig selector: Statistical estimation when p is much larger than n , 2005, math/0506081.
[34] Stephen P. Boyd,et al. Enhancing Sparsity by Reweighted ℓ1 Minimization , 2007, 0711.1612.
[35] A. Tsybakov,et al. Sparsity oracle inequalities for the Lasso , 2007, 0705.3308.
[36] Weiyu Xu,et al. Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization , 2008, 2008 47th IEEE Conference on Decision and Control.
[37] David L. Donoho,et al. Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing , 2009, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[38] N. Meinshausen,et al. LASSO-TYPE RECOVERY OF SPARSE REPRESENTATIONS FOR HIGH-DIMENSIONAL DATA , 2008, 0806.0145.
[39] Mihailo Stojnic,et al. Various thresholds for ℓ1-optimization in compressed sensing , 2009, ArXiv.
[40] Mihailo Stojnic,et al. Block-length dependent thresholds in block-sparse compressed sensing , 2009, ArXiv.
[41] Emmanuel J. Candès,et al. Exact Matrix Completion via Convex Optimization , 2009, Found. Comput. Math..
[42] Alexandros G. Dimakis,et al. Sparse Recovery of Positive Signals with Minimal Expansion , 2009, ArXiv.
[44] Andrea Montanari,et al. Message-passing algorithms for compressed sensing , 2009, Proceedings of the National Academy of Sciences.
[45] Martin J. Wainwright,et al. A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers , 2009, NIPS.
[46] S. Yun,et al. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems , 2009 .
[47] Babak Hassibi,et al. On the Reconstruction of Block-Sparse Signals With an Optimal Number of Measurements , 2008, IEEE Transactions on Signal Processing.
[49] Weiyu Xu,et al. Weighted ℓ1 minimization for sparse recovery with prior information , 2009, 2009 IEEE International Symposium on Information Theory.
[50] Jian-Feng Cai,et al. Linearized Bregman iterations for compressed sensing , 2009, Math. Comput..
[51] P. Bickel,et al. SIMULTANEOUS ANALYSIS OF LASSO AND DANTZIG SELECTOR , 2008, 0801.1095.
[52] Julien Mairal,et al. Proximal Methods for Sparse Hierarchical Dictionary Learning , 2010, ICML.
[53] Emmanuel J. Candès,et al. A Singular Value Thresholding Algorithm for Matrix Completion , 2008, SIAM J. Optim..
[54] Pablo A. Parrilo,et al. Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization , 2007, SIAM Rev..
[55] Babak Hassibi,et al. New Null Space Results and Recovery Thresholds for Matrix Rank Minimization , 2010, ArXiv.
[56] Wei Lu,et al. Modified-CS: Modifying compressive sensing for problems with partially known support , 2009, 2009 IEEE International Symposium on Information Theory.
[57] Francis R. Bach,et al. Structured sparsity-inducing norms through submodular functions , 2010, NIPS.
[58] John Wright,et al. RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images , 2010, 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
[59] V. Koltchinskii,et al. Nuclear norm penalization and optimal rates for noisy low rank matrix completion , 2010, 1011.6256.
[60] Andrea Montanari,et al. The dynamics of message passing on dense graphs, with applications to compressed sensing , 2010, 2010 IEEE International Symposium on Information Theory.
[61] Andrea Montanari,et al. Analysis of approximate message passing algorithm , 2010, 2010 44th Annual Conference on Information Sciences and Systems (CISS).
[62] Yonina C. Eldar,et al. Block-Sparse Signals: Uncertainty Relations and Efficient Recovery , 2009, IEEE Transactions on Signal Processing.
[63] Xiaodong Li,et al. Stable Principal Component Pursuit , 2010, 2010 IEEE International Symposium on Information Theory.
[64] Emmanuel J. Candès,et al. The Power of Convex Relaxation: Near-Optimal Matrix Completion , 2009, IEEE Transactions on Information Theory.
[65] Emmanuel J. Candès,et al. Tight oracle bounds for low-rank matrix recovery from a minimal number of random measurements , 2010, ArXiv.
[66] Volkan Cevher,et al. Model-Based Compressive Sensing , 2008, IEEE Transactions on Information Theory.
[67] A. Willsky,et al. Latent variable graphical model selection via convex optimization , 2010 .
[68] Emmanuel J. Candès,et al. Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements , 2011, IEEE Transactions on Information Theory.
[69] Martin J. Wainwright,et al. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions , 2011, ICML.
[70] Gongguo Tang,et al. Atomic Norm Denoising With Applications to Line Spectral Estimation , 2012, IEEE Transactions on Signal Processing.
[71] Alexandros G. Dimakis,et al. Sparse Recovery of Nonnegative Signals With Minimal Expansion , 2011, IEEE Transactions on Signal Processing.
[72] Shiqian Ma,et al. Fixed point and Bregman iterative methods for matrix rank minimization , 2009, Math. Program..
[73] Andrea Montanari,et al. The Noise-Sensitivity Phase Transition in Compressed Sensing , 2010, IEEE Transactions on Information Theory.
[74] Emmanuel J. Candès,et al. How well can we estimate a sparse vector? , 2011, ArXiv.
[75] Robert D. Nowak,et al. Tight Measurement Bounds for Exact Recovery of Structured Sparse Signals , 2011, ArXiv.
[76] Patrick L. Combettes,et al. Proximal Splitting Methods in Signal Processing , 2009, Fixed-Point Algorithms for Inverse Problems in Science and Engineering.
[77] Yi Ma,et al. Robust principal component analysis? , 2009, JACM.
[78] Emmanuel J. Candès,et al. Simple Bounds for Low-complexity Model Reconstruction , 2011, ArXiv.
[79] Xiaoming Yuan,et al. Recovering Low-Rank and Sparse Components of Matrices from Incomplete and Noisy Observations , 2011, SIAM J. Optim..
[80] Pablo A. Parrilo,et al. Rank-Sparsity Incoherence for Matrix Decomposition , 2009, SIAM J. Optim..
[81] A. Belloni,et al. Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming , 2011 .
[82] Babak Hassibi,et al. Tight recovery thresholds and robustness analysis for nuclear norm minimization , 2011, 2011 IEEE International Symposium on Information Theory Proceedings.
[83] Andrea Montanari,et al. The LASSO Risk for Gaussian Matrices , 2010, IEEE Transactions on Information Theory.
[84] Nicolas Vayatis,et al. Estimation of Simultaneously Sparse and Low Rank Matrices , 2012, ICML.
[85] Andrea Montanari,et al. Universality in Polytope Phase Transitions and Message Passing Algorithms , 2012, ArXiv.
[86] Babak Hassibi,et al. On a relation between the minimax risk and the phase transitions of compressed recovery , 2012, 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[87] Pablo A. Parrilo,et al. The Convex Geometry of Linear Inverse Problems , 2010, Foundations of Computational Mathematics.
[88] John Wright,et al. Compressive principal component pursuit , 2012, 2012 IEEE International Symposium on Information Theory Proceedings.
[89] Babak Hassibi,et al. Recovery threshold for optimal weight ℓ1 minimization , 2012, 2012 IEEE International Symposium on Information Theory Proceedings.
[90] Michael A. Saunders,et al. Proximal Newton-type Methods for Minimizing Convex Objective Functions in Composite Form , 2012, NIPS 2012.
[91] Joel A. Tropp,et al. Sharp recovery bounds for convex deconvolution, with applications , 2012, ArXiv.
[92] Mohamed-Jalal Fadili,et al. A quasi-Newton proximal splitting method , 2012, NIPS.
[93] Michael I. Jordan,et al. Computational and statistical tradeoffs via convex relaxation , 2012, Proceedings of the National Academy of Sciences.
[94] Francis R. Bach,et al. Intersecting singularities for multi-structured estimation , 2013, ICML.
[95] Andrew B. Nobel,et al. Reconstruction of a low-rank matrix in the presence of Gaussian noise , 2010, J. Multivar. Anal..
[96] Christos Thrampoulidis,et al. The squared-error of generalized LASSO: A precise analysis , 2013, 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[97] Guarantees of total variation minimization for signal recovery , 2013, 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[98] Andrea Montanari,et al. The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising , 2013, Proceedings of the National Academy of Sciences.
[99] Joel A. Tropp,et al. Living on the edge: A geometric theory of phase transitions in convex optimization , 2013, ArXiv.
[100] Mihailo Stojnic,et al. A framework to characterize performance of LASSO algorithms , 2013, ArXiv.
[101] Mihailo Stojnic,et al. A performance analysis framework for SOCP algorithms in noisy compressed sensing , 2013, ArXiv.
[102] Emmanuel J. Candès,et al. Simple bounds for recovering low-complexity models , 2011, Math. Program..
[103] Deanna Needell,et al. Stable Image Reconstruction Using Total Variation Minimization , 2012, SIAM J. Imaging Sci..
[104] Richard G. Baraniuk,et al. Asymptotic Analysis of Complex LASSO via Complex Approximate Message Passing (CAMP) , 2011, IEEE Transactions on Information Theory.
[105] Andrea Montanari,et al. Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising , 2011, IEEE Transactions on Information Theory.
[106] Joel A. Tropp,et al. Living on the edge: phase transitions in convex programs with random data , 2013, 1303.6672.
[107] D. L. Donoho,et al. Compressed sensing , 2006, IEEE Trans. Inf. Theory.
[108] Vidyashankar Sivakumar,et al. Estimation with Norm Regularization , 2014, NIPS.
[109] Michael A. Saunders,et al. Proximal Newton-Type Methods for Minimizing Composite Functions , 2012, SIAM J. Optim..
[110] Tengyuan Liang,et al. Geometrizing Local Rates of Convergence for High-Dimensional Linear Inverse Problems , 2014 .
[111] D. Donoho,et al. Minimax risk of matrix denoising by singular value thresholding , 2013, 1304.2085.
[112] Rina Foygel,et al. Corrupted Sensing: Novel Guarantees for Separating Structured Signals , 2013, IEEE Transactions on Information Theory.
[113] Stephen P. Boyd,et al. Proximal Algorithms , 2013, Found. Trends Optim..
[114] Joel A. Tropp,et al. Sharp Recovery Bounds for Convex Demixing, with Applications , 2012, Found. Comput. Math..
[115] Christos Thrampoulidis,et al. Asymptotically Exact Error Analysis for the Generalized ℓ2²-LASSO , 2015, ISIT 2015.
[116] Yonina C. Eldar,et al. Simultaneously Structured Models With Application to Sparse and Low-Rank Matrices , 2012, IEEE Transactions on Information Theory.
[117] D. Donoho,et al. Minimax risk over ℓp-balls for ℓq-error , 1994 .