On the Convergence of Adam and Beyond

Several recently proposed stochastic optimization methods that have been successfully used for training deep networks, such as RMSProp, Adam, Adadelta, and Nadam, are based on gradient updates scaled by square roots of exponential moving averages of past squared gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause of such failures is the exponential moving average used in these algorithms. We provide an explicit example of a simple convex optimization setting in which Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of the Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with "long-term memory" of past gradients, and we propose new variants of the Adam algorithm that not only fix the convergence issues but often also lead to improved empirical performance.
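The update rule the abstract describes, and the long-term-memory fix, can be sketched in a few lines. The following is a minimal illustration, not the paper's exact algorithm: it contrasts a plain Adam-style step (second moment is an exponential moving average, which can shrink) with a variant that keeps the running maximum of the second-moment estimate so the effective step size is non-increasing in scale (the AMSGrad-style fix); bias correction is omitted for brevity, and all hyperparameter values are conventional defaults, not prescribed by this text.

```python
import numpy as np

def adam_style_step(theta, grad, m, v, vhat,
                    lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8,
                    long_term_memory=False):
    """One illustrative update step (bias correction omitted).

    long_term_memory=False: plain Adam-style scaling by sqrt of an
    exponential moving average of squared gradients.
    long_term_memory=True: scale by the running max of that average,
    so the denominator never shrinks ("long-term memory" of gradients).
    """
    m = beta1 * m + (1 - beta1) * grad        # EMA of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # EMA of squared gradients
    if long_term_memory:
        vhat = np.maximum(vhat, v)            # keep the largest scale seen
        denom = np.sqrt(vhat) + eps
    else:
        denom = np.sqrt(v) + eps
    theta = theta - lr * m / denom
    return theta, m, v, vhat
```

The key difference is the `np.maximum` line: with the plain EMA, a burst of large gradients is forgotten exponentially fast, which is exactly the failure mode analyzed in the paper; taking the running max prevents the scaling from decaying.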
