[1] J. Neumann. Zur Theorie der Gesellschaftsspiele, 1928.
[2] I. Glicksberg. A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points, 1952.
[3] M. Zedek. Continuity and location of zeros of linear combinations of polynomials, 1965.
[4] T. Maruyama. On some developments in Convex Analysis, 1977.
[5] M. Dufwenberg. Game theory, 2011, Wiley Interdisciplinary Reviews: Cognitive Science.
[6] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[7] Furong Huang, et al. Escaping From Saddle Points - Online Stochastic Gradient for Tensor Decomposition, 2015, COLT.
[8] Uriel Feige, et al. Learning and inference in the presence of corrupted inputs, 2015, COLT.
[9] Nicolas Boumal, et al. The non-convex Burer-Monteiro approach works on smooth semidefinite programs, 2016, NIPS.
[10] Michael I. Jordan, et al. Gradient Descent Only Converges to Minimizers, 2016, COLT.
[11] Elad Hazan, et al. Introduction to Online Convex Optimization, 2016, Found. Trends Optim.
[12] Georgios Piliouras, et al. Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions, 2016, ITCS.
[13] J. Zico Kolter, et al. Gradient descent GAN optimization is locally stable, 2017, NIPS.
[14] Ashish Cherukuri, et al. Saddle-Point Dynamics: Conditions for Asymptotic Stability of Saddle Points, 2015, SIAM J. Control. Optim.
[15] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[16] Yi Zheng, et al. No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis, 2017, ICML.
[17] Yingyu Liang, et al. Generalization and Equilibrium in Generative Adversarial Nets (GANs), 2017, ICML.
[18] Robert S. Chen, et al. Robust Optimization for Non-Convex Objectives, 2017, NIPS.
[19] Jonathan P. How, et al. Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability, 2017, ICML.
[20] Michael I. Jordan, et al. How to Escape Saddle Points Efficiently, 2017, ICML.
[21] Andreas Krause, et al. An Online Learning Approach to Generative Adversarial Networks, 2017, ICLR.
[22] Constantinos Daskalakis, et al. Training GANs with Optimism, 2017, ICLR.
[23] Constantinos Daskalakis, et al. The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization, 2018, NeurIPS.
[24] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[25] Mingrui Liu, et al. Non-Convex Min-Max Optimization: Provable Algorithms and Applications in Machine Learning, 2018, arXiv.
[26] Mingrui Liu, et al. Solving Weakly-Convex-Weakly-Concave Saddle-Point Problems as Weakly-Monotone Variational Inequality, 2018.
[27] Dmitriy Drusvyatskiy, et al. Stochastic subgradient method converges at the rate O(k^{-1/4}) on weakly convex functions, 2018, arXiv.
[28] Volkan Cevher, et al. Finding Mixed Nash Equilibria of Generative Adversarial Networks, 2018, ICML.
[29] Ioannis Mitliagkas, et al. Negative Momentum for Improved Game Dynamics, 2018, AISTATS.
[30] S. Shankar Sastry, et al. On Finding Local Nash Equilibria (and Only Local Nash Equilibria) in Zero-Sum Games, 2019, arXiv:1901.00838.
[31] Alon Gonen, et al. Learning in Non-convex Games with an Optimization Oracle, 2018, COLT.
[32] Thomas Hofmann, et al. Local Saddle Point Optimization: A Curvature Exploitation Approach, 2018, AISTATS.