A Characterization of Stochastic Mirror Descent Algorithms and Their Convergence Properties
[1] Claudio Gentile, et al. The Robustness of the p-Norm Algorithms, 1999, COLT '99.
[2] Nathan Srebro, et al. Characterizing Implicit Bias in Terms of Optimization Geometry, 2018, ICML.
[3] Thomas Kailath, et al. Optimal Training Algorithms and their Relation to Backpropagation, 1994, NIPS.
[4] Ali H. Sayed, et al. H∞ optimality of the LMS algorithm, 1996, IEEE Trans. Signal Process..
[5] Babak Hassibi, et al. Indefinite-Quadratic Estimation and Control, 1987.
[6] Babak Hassibi, et al. Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization, 2018, ICLR.
[7] Dale Schuurmans, et al. General Convergence Results for Linear Discriminant Updates, 1997, COLT '97.
[8] Nicolò Cesa-Bianchi, et al. Mirror Descent Meets Fixed Share (and feels no regret), 2012, NIPS.
[9] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[10] Thomas Kailath, et al. H∞ Optimality Criteria for LMS and Backpropagation, 1993, NIPS.
[11] Stephen P. Boyd, et al. Stochastic Mirror Descent in Variationally Coherent Optimization Problems, 2017, NIPS.
[12] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization, 1983.
[13] Marc Teboulle, et al. Mirror descent and nonlinear projected subgradient methods for convex optimization, 2003, Oper. Res. Lett..
[14] D. Simon. Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches, 2006.
[15] Babak Hassibi, et al. The p-norm generalization of the LMS algorithm for adaptive filtering, 2003, IEEE Transactions on Signal Processing.
[16] T. Kailath, et al. Indefinite-quadratic estimation and control: a unified approach to H2 and H∞ theories, 1999.