[1] Alan M. Frieze,et al. Fast Monte-Carlo algorithms for finding low-rank approximations , 1998, Proceedings 39th Annual Symposium on Foundations of Computer Science.
[2] Katya Scheinberg,et al. Efficient SVM Training Using Low-Rank Kernel Representations , 2002, J. Mach. Learn. Res..
[3] Ziv Bar-Yossef,et al. Sampling lower bounds via information theory , 2003, STOC '03.
[4] Nello Cristianini,et al. Kernel Methods for Pattern Analysis , 2006 .
[5] Petros Drineas,et al. On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning , 2005, J. Mach. Learn. Res..
[6] Michael I. Jordan,et al. Predictive low-rank decomposition for kernel methods , 2005, ICML.
[7] A. Atiya,et al. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond , 2005, IEEE Transactions on Neural Networks.
[8] Benjamin Recht,et al. Random Features for Large-Scale Kernel Machines , 2007, NIPS.
[9] Claudio Gentile,et al. Tracking the best hyperplane with a simple budget Perceptron , 2006, Machine Learning.
[10] Y. Yao,et al. On Early Stopping in Gradient Descent Learning , 2007 .
[11] Benjamin Recht,et al. Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning , 2008, NIPS.
[12] Nathan Srebro,et al. Fast Rates for Regularized Objectives , 2008, NIPS.
[13] Ambuj Tewari,et al. On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization , 2008, NIPS.
[14] Yoram Singer,et al. The Forgetron: A Kernel-Based Perceptron on a Budget , 2008, SIAM J. Comput..
[15] Ameet Talwalkar,et al. Sampling Techniques for the Nyström Method , 2009, AISTATS.
[16] Petros Drineas,et al. CUR matrix decompositions for improved data analysis , 2009, Proceedings of the National Academy of Sciences.
[17] Ameet Talwalkar,et al. On the Impact of Kernel Approximation on Learning Accuracy , 2010, AISTATS.
[18] Martin J. Wainwright,et al. Early stopping for non-parametric regression: An optimal data-dependent stopping rule , 2011, 2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[19] Rong Jin,et al. Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison , 2012, NIPS.
[20] Nathan Srebro,et al. The Kernelized Stochastic Batch Perceptron , 2012, ICML.
[21] Nathan Srebro,et al. Learning Optimally Sparse Support Vector Machines , 2013, ICML.
[22] Martin J. Wainwright,et al. Divide and Conquer Kernel Ridge Regression , 2013, COLT.
[23] Changshui Zhang,et al. On the Sample Complexity of Random Fourier Features for Online Learning , 2014, ACM Trans. Knowl. Discov. Data.
[24] Francis R. Bach,et al. Sharp analysis of low-rank kernel matrix approximations , 2012, COLT.
[25] Michael W. Mahoney,et al. Fast Randomized Kernel Methods With Statistical Guarantees , 2014, arXiv.
[26] Le Song,et al. Scalable Kernel Methods via Doubly Stochastic Gradients , 2014, NIPS.
[27] Inderjit S. Dhillon,et al. A Divide-and-Conquer Solver for Kernel Support Vector Machines , 2013, ICML.