Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints
[1] Manfred K. Warmuth, et al. The Perceptron Algorithm Versus Winnow: Linear Versus Logarithmic Mistake Bounds when Few Input Variables are Relevant (Technical Note), 1997, Artif. Intell.
[2] S. V. N. Vishwanathan, et al. Leaving the Span, 2005, COLT.
[3] Manfred K. Warmuth, et al. The perceptron algorithm vs. Winnow: linear vs. logarithmic mistake bounds when few input variables are relevant, 1995, COLT '95.
[4] Manfred K. Warmuth, et al. Exponentiated Gradient Versus Gradient Descent for Linear Predictors, 1997, Inf. Comput.
[5] Rocco A. Servedio, et al. Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms, 2001, NIPS.
[6] Manfred K. Warmuth, et al. Additive versus exponentiated gradient updates for linear prediction, 1995, STOC '95.
[7] Philip M. Long, et al. Mistake Bounds for Maximum Entropy Discrimination, 2004, NIPS.
[8] N. Littlestone. Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm, 1987, 28th Annual Symposium on Foundations of Computer Science (SFCS 1987).