GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
Martin Heusel | Hubert Ramsauer | Thomas Unterthiner | Bernhard Nessler | Sepp Hochreiter
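The title's "two time-scale update rule" refers to running the two players of a min-max game with different learning rates. As a toy illustration (not code from the paper), the sketch below applies gradient descent-ascent with a fast and a slow step size to the quadratic saddle f(x, y) = x^2 + xy - y^2, whose unique Nash equilibrium is (0, 0); the fast player is loosely analogous to the discriminator and the slow player to the generator. All names and step sizes here are illustrative assumptions.

```python
# Toy two time-scale gradient descent-ascent on the quadratic saddle
#   f(x, y) = x^2 + x*y - y^2,   Nash equilibrium at (0, 0).
# The maximizing player (discriminator analogue) uses the faster step size,
# the minimizing player (generator analogue) the slower one.
# Step sizes and iteration count are illustrative, not from the paper.

def two_timescale_gda(x=1.0, y=1.0, lr_slow=0.005, lr_fast=0.05, steps=2000):
    for _ in range(steps):
        gx = 2.0 * x + y      # df/dx
        gy = x - 2.0 * y      # df/dy
        x -= lr_slow * gx     # minimizing player, slow time scale
        y += lr_fast * gy     # maximizing player, fast time scale
    return x, y

x, y = two_timescale_gda()
print(x, y)  # both iterates contract toward the equilibrium (0, 0)
```

For this linear system both discrete-time eigenvalues lie inside the unit circle, so the iterates converge to the saddle point; with a single shared step size the stability margin is smaller, which is the kind of effect the two time-scale analysis formalizes.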
[1] Boris Polyak. Some methods of speeding up the convergence of iteration methods, 1964.
[2] Harold J. Kushner, et al. Stochastic approximation methods for constrained and unconstrained systems, 1978.
[3] D. Dowson, et al. The Fréchet distance between multivariate normal distributions, 1982.
[4] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2), 1983.
[5] Morris W. Hirsch, et al. Convergent activation dynamics in continuous time networks, 1989, Neural Networks.
[6] V. Borkar. Stochastic approximation with two time scales, 1997.
[7] Jürgen Schmidhuber, et al. Flat Minima, 1997, Neural Computation.
[8] John N. Tsitsiklis, et al. Actor-Critic Algorithms, 1999, NIPS.
[9] Vivek S. Borkar, et al. Actor-Critic-Type Learning Algorithms for Markov Decision Processes, 1999, SIAM J. Control Optim.
[10] Sean P. Meyn, et al. The O.D.E. Method for Convergence of Stochastic Approximation and Reinforcement Learning, 2000, SIAM J. Control Optim.
[11] John N. Tsitsiklis, et al. Gradient Convergence in Gradient Methods with Errors, 1999, SIAM J. Optim.
[12] H. Attouch, et al. The heavy ball with friction method, I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system, 2000.
[13] John N. Tsitsiklis, et al. Linear stochastic approximation driven by slowly varying Markov chains, 2003, Syst. Control Lett.
[14] V. Tadić. Almost sure convergence of two time-scale stochastic approximation algorithms, 2004, Proceedings of the 2004 American Control Conference.
[15] J. Tsitsiklis, et al. Convergence rate of linear two-time-scale stochastic approximation, 2004, math/0405287.
[16] A. Mokkadem, et al. Convergence rate and averaging of nonlinear two-time-scale stochastic approximation algorithms, 2006, math/0610329.
[17] Junshan Zhang, et al. The Impact of Stochastic Noisy Feedback on Distributed Network Utility Maximization, 2007, IEEE Transactions on Information Theory.
[18] X. Goudou, et al. The gradient and heavy ball with friction dynamical systems: the quasiconvex case, 2008, Math. Program.
[19] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[20] Ron Meir, et al. A Convergent Online Single Time Scale Actor Critic Algorithm, 2009, J. Mach. Learn. Res.
[21] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[22] Shalabh Bhatnagar, et al. Stochastic Recursive Algorithms for Optimization, 2012.
[23] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[24] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013.
[25] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[26] Thorsten Brants, et al. One billion word benchmark for measuring progress in statistical language modeling, 2013, INTERSPEECH.
[27] S. Bhatnagar, et al. Two Timescale Stochastic Approximation with Controlled Markov Noise, 2015, ArXiv.
[28] Shalabh Bhatnagar, et al. Two-Timescale Algorithms for Learning Nash Equilibria in General-Sum Stochastic Games, 2015, AAMAS.
[29] S. Bhatnagar, et al. Stochastic recursive inclusion in two timescales with an application to the Lagrangian dual problem, 2015.
[30] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[31] Yinda Zhang, et al. LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop, 2015, ArXiv.
[32] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[33] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[34] Ian J. Goodfellow, et al. On distinguishability criteria for estimating generative models, 2014, ICLR.
[35] Matthias Bethge, et al. A note on the evaluation of generative models, 2015, ICLR.
[36] S. Gadat, et al. Stochastic Heavy Ball, 2016, 1609.04228.
[37] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[38] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[39] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[40] S. Bhatnagar, et al. Dynamics of stochastic approximation with Markov iterate-dependent noise with the stability of the iterates not ensured, 2016.
[41] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, ArXiv.
[42] Sepp Hochreiter, et al. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), 2015, ICLR.
[43] Yoshua Bengio, et al. Boundary-Seeking Generative Adversarial Networks, 2017, ICLR.
[44] J. Zico Kolter, et al. Gradient descent GAN optimization is locally stable, 2017, NIPS.
[45] Ian J. Goodfellow, et al. NIPS 2016 Tutorial: Generative Adversarial Networks, 2016, ArXiv.
[46] Léon Bottou, et al. Wasserstein GAN, 2017, ArXiv.
[47] Sebastian Nowozin, et al. The Numerics of GANs, 2017, NIPS.
[48] David Pfau, et al. Unrolled Generative Adversarial Networks, 2016, ICLR.
[49] Jae Hyun Lim, et al. Geometric GAN, 2017, ArXiv.
[50] Yingyu Liang, et al. Generalization and Equilibrium in Generative Adversarial Nets (GANs), 2017, ICML.
[51] Ruslan Salakhutdinov, et al. On the Quantitative Analysis of Decoder-Based Generative Models, 2016, ICLR.
[52] Alexei A. Efros, et al. Image-to-Image Translation with Conditional Adversarial Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[53] Bernhard Schölkopf, et al. AdaGAN: Boosting Generative Models, 2017, NIPS.
[54] David Berthelot, et al. BEGAN: Boundary Equilibrium Generative Adversarial Networks, 2017, ArXiv.
[55] Yiannis Demiris, et al. MAGAN: Margin Adaptation for Generative Adversarial Networks, 2017, ArXiv.
[56] Yiming Yang, et al. MMD GAN: Towards Deeper Understanding of Moment Matching Network, 2017, NIPS.
[57] Jerry Li, et al. Towards Understanding the Dynamics of Generative Adversarial Networks, 2017, ArXiv.
[58] Christian Ledig, et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[59] Kamalika Chaudhuri, et al. Approximation and Convergence Properties of Generative Adversarial Learning, 2017, NIPS.
[60] Tom Sercu, et al. Fisher GAN, 2017, NIPS.
[61] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[62] Yoshua Bengio, et al. Mode Regularized Generative Adversarial Networks, 2016, ICLR.
[63] Andreas Krause, et al. An Online Learning Approach to Generative Adversarial Networks, 2017, ICLR.