[1] M. Teboulle, et al. A geometric property of the least squares solution of linear equations, 1990.
[2] Luis Rademacher, et al. Efficient Volume Sampling for Row/Column Subset Selection, 2010, IEEE 51st Annual Symposium on Foundations of Computer Science.
[3] Chih-Jen Lin, et al. LIBSVM: A library for support vector machines, 2011, TIST.
[4] Manfred K. Warmuth, et al. Unbiased estimates for linear regression via volume sampling, 2017, NIPS.
[5] Bernard Chazelle, et al. The Fast Johnson–Lindenstrauss Transform and Approximate Nearest Neighbors, 2009, SIAM J. Comput.
[6] M. Chao, et al. Negative Moments of Positive Random Variables, 1972.
[7] Ben Taskar, et al. k-DPPs: Fixed-Size Determinantal Point Processes, 2011, ICML.
[8] Ben Taskar, et al. Determinantal Point Processes for Machine Learning, 2012, Found. Trends Mach. Learn.
[9] Nisheeth K. Vishnoi, et al. Fair and Diverse DPP-based Data Summarization, 2018, ICML.
[10] Eungchun Cho, et al. Inner Product of Random Vectors on S^n, 2013.
[11] Aarti Singh, et al. On Computationally Tractable Selection of Experiments in Measurement-Constrained Regression Models, 2016, J. Mach. Learn. Res.
[12] Mohit Singh, et al. Proportional Volume Sampling and Approximation Algorithms for A-Optimal Design, 2018, SODA.
[14] Manfred K. Warmuth, et al. Subsampling for Ridge Regression via Regularized Volume Sampling, 2017, AISTATS.
[15] Daniele Calandriello, et al. Exact sampling of determinantal point processes with sublinear time preprocessing, 2019, NeurIPS.
[16] David P. Woodruff, et al. Fast approximation of matrix coherence and statistical leverage, 2011, ICML.
[17] Manfred K. Warmuth, et al. Reverse iterative volume sampling for linear regression, 2018, J. Mach. Learn. Res.
[18] Venkatesan Guruswami, et al. Optimal column-based low-rank matrix reconstruction, 2011, SODA.
[19] S. Muthukrishnan, et al. Sampling algorithms for l2 regression and applications, 2006, SODA '06.
[20] Yin Tat Lee, et al. Constructing Linear-Sized Spectral Sparsification in Almost-Linear Time, 2015, IEEE 56th Annual Symposium on Foundations of Computer Science.
[21] Manfred K. Warmuth, et al. Correcting the bias in least squares regression with volume-rescaled sampling, 2018, AISTATS.
[22] Christos Boutsidis, et al. Faster Subset Selection for Matrices and Applications, 2011, SIAM J. Matrix Anal. Appl.
[23] Rémi Bardenet, et al. On a few statistical applications of determinantal point processes, 2017.
[24] Santosh S. Vempala, et al. Matrix approximation and projective clustering via volume sampling, 2006, SODA '06.
[25] S. Mitra. A density-free approach to the matrix variate beta distribution, 1970.
[26] Michal Derezinski, et al. Fast determinantal point processes via distortion-free intermediate sampling, 2018, COLT.
[27] H. R. Vaart. A Note on Wilks' Internal Scatter, 1965.
[28] L. Kantorovich, et al. Functional analysis and applied mathematics, 1963.
[29] Suvrit Sra, et al. Polynomial time algorithms for dual volume sampling, 2017, NIPS.
[30] Suvrit Sra, et al. Elementary Symmetric Polynomials for Optimal Experimental Design, 2017, NIPS.
[31] Joel A. Tropp, et al. User-Friendly Tail Bounds for Sums of Random Matrices, 2010, Found. Comput. Math.
[32] Manfred K. Warmuth, et al. Leveraged volume sampling for linear regression, 2018, NeurIPS.
[33] Eric Price, et al. Active Regression via Linear-Sample Sparsification, 2017, COLT.
[34] Y. Peres, et al. Determinantal Processes and Independence, 2005, math/0503110.
[35] Ulrich Paquet, et al. Bayesian Low-Rank Determinantal Point Processes, 2016, RecSys.
[36] Nikhil Srivastava, et al. Twice-Ramanujan Sparsifiers, 2008, STOC '09.