The $p$-Norm Generalization of the LMS Algorithm for Adaptive Filtering
[1] A. Hall, et al. Adaptive Switching Circuits, 2016.
[2] Babak Hassibi, et al. The p-norm generalization of the LMS algorithm for adaptive filtering, 2003, IEEE Transactions on Signal Processing.
[3] S. V. N. Vishwanathan, et al. Leaving the Span, 2005, COLT.
[4] A. Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance, 2004, Twenty-first International Conference on Machine Learning - ICML '04.
[5] Claudio Gentile, et al. On the generalization ability of on-line learning algorithms, 2001, IEEE Transactions on Information Theory.
[6] Manfred K. Warmuth, et al. Relative Loss Bounds for On-Line Density Estimation with the Exponential Family of Distributions, 1999, Machine Learning.
[7] Manfred K. Warmuth, et al. Relative Loss Bounds for Multidimensional Regression Problems, 1997, Machine Learning.
[8] Manfred K. Warmuth, et al. On the Worst-Case Analysis of Temporal-Difference Learning Algorithms, 2005, Machine Learning.
[9] Manfred K. Warmuth, et al. Path Kernels and Multiplicative Updates, 2002, J. Mach. Learn. Res.
[10] Claudio Gentile, et al. Adaptive and Self-Confident On-Line Learning Algorithms, 2000, J. Comput. Syst. Sci.
[11] Mark Herbster, et al. Tracking the Best Linear Predictor, 2001, J. Mach. Learn. Res.
[12] Tareq Y. Al-Naffouri, et al. On the selection of optimal nonlinearities for stochastic gradient adaptive algorithms, 2000, 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100).
[13] Manfred K. Warmuth, et al. Relative loss bounds for single neurons, 1999, IEEE Trans. Neural Networks.
[14] Claudio Gentile, et al. The Robustness of the p-Norm Algorithms, 1999, COLT '99.
[15] Dale Schuurmans, et al. General Convergence Results for Linear Discriminant Updates, 1997, COLT '97.
[16] Ali H. Sayed, et al. H∞ optimality of the LMS algorithm, 1996, IEEE Trans. Signal Process.
[17] Peter Auer, et al. Exponentially many local minima for single neurons, 1995, NIPS.
[18] Manfred K. Warmuth, et al. The perceptron algorithm vs. Winnow: linear vs. logarithmic mistake bounds when few input variables are relevant, 1995, COLT '95.
[19] Manfred K. Warmuth, et al. Additive versus exponentiated gradient updates for linear prediction, 1995, STOC '95.
[20] Philip M. Long, et al. Worst-Case Quadratic Loss Bounds for On-Line Prediction of Linear Functions by Gradient Descent, 1993.
[21] N. Littlestone. Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm, 1987, 28th Annual Symposium on Foundations of Computer Science (SFCS 1987).
[22] L. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming, 1967.
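The titled paper (entry [2] above) builds LMS-style updates around a p-norm link function, in the mirror-descent style of the p-norm algorithms in entries [14] and [15]. As a hedged illustration only — this is not the authors' code, and the names `link` and `p_norm_lms` are my own — a minimal sketch might look like:

```python
import numpy as np

def link(theta, p):
    """Link function: gradient of (1/2)*||theta||_p^2 (the mirror map)."""
    norm = np.linalg.norm(theta, p)
    if norm == 0.0:
        return np.zeros_like(theta)
    return np.sign(theta) * np.abs(theta) ** (p - 1) / norm ** (p - 2)

def p_norm_lms(X, y, p=2.0, eta=0.05):
    """p-norm LMS sketch (requires p > 1): take the LMS gradient step in
    the dual coordinates, then map back to primal weights through the link.
    With p = 2 the link is the identity, recovering classical LMS."""
    q = p / (p - 1)                # dual exponent, 1/p + 1/q = 1
    theta = np.zeros(X.shape[1])   # dual-space weight vector
    errors = []
    for x_t, y_t in zip(X, y):
        w = link(theta, q)         # primal weights w_t = f_q(theta_t)
        e_t = y_t - w @ x_t        # a priori prediction error
        errors.append(e_t)
        theta = theta + eta * e_t * x_t   # LMS-style step in the dual
    return link(theta, q), np.array(errors)
```

For p = 2 the loop is exactly the classical LMS recursion; other choices of p interpolate between additive and multiplicative-flavored updates, which is the trade-off the paper and entries [14], [18], and [19] analyze.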