[1] Denis Lukovnikov,et al. On the regularization of Wasserstein GANs , 2017, ICLR.
[2] David Pfau,et al. Unrolled Generative Adversarial Networks , 2016, ICLR.
[3] Marc G. Bellemare,et al. The Cramer Distance as a Solution to Biased Wasserstein Gradients , 2017, ArXiv.
[4] Xiaogang Wang,et al. Deep Learning Face Attributes in the Wild , 2015, IEEE International Conference on Computer Vision (ICCV).
[5] Sebastian Nowozin,et al. The Numerics of GANs , 2017, NIPS.
[6] Léon Bottou,et al. Towards Principled Methods for Training Generative Adversarial Networks , 2017, ICLR.
[7] Sebastian Nowozin,et al. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization , 2016, NIPS.
[8] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[9] Sebastian Nowozin,et al. Which Training Methods for GANs do actually Converge? , 2018, ICML.
[10] Léon Bottou,et al. Wasserstein Generative Adversarial Networks , 2017, ICML.
[11] Lars M. Mescheder,et al. On the convergence properties of GAN training , 2018, ArXiv.
[12] Nikolay Jetchev,et al. GANosaic: Mosaic Creation with Generative Texture Manifolds , 2017, ArXiv.
[13] Richard S. Zemel,et al. Generative Moment Matching Networks , 2015, ICML.
[14] J. Zico Kolter,et al. Gradient descent GAN optimization is locally stable , 2017, NIPS.
[15] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[16] Sepp Hochreiter,et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium , 2017, NIPS.
[17] Soumith Chintala,et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks , 2015, ICLR.
[18] Jacob Abernethy,et al. On Convergence and Stability of GANs , 2018 .
[19] Roland Vollgraf,et al. Texture Synthesis with Spatial Generative Adversarial Networks , 2016, ArXiv.
[20] Yinda Zhang,et al. LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop , 2015, ArXiv.
[21] Andrew M. Dai,et al. Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step , 2017, ICLR.
[22] Thorsten Brants,et al. One billion word benchmark for measuring progress in statistical language modeling , 2013, INTERSPEECH.
[23] Zoubin Ghahramani,et al. Training generative neural networks via Maximum Mean Discrepancy optimization , 2015, UAI.
[24] Sepp Hochreiter,et al. Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields , 2017, ICLR.
[25] Christian Ledig,et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network , 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Kamalika Chaudhuri,et al. Approximation and Convergence Properties of Generative Adversarial Learning , 2017, NIPS.
[27] Roland Vollgraf,et al. Learning Texture Manifolds with the Periodic Spatial GAN , 2017, ICML.
[28] Wojciech Zaremba,et al. Improved Techniques for Training GANs , 2016, NIPS.
[29] Paul R. Milgrom,et al. Envelope Theorems for Arbitrary Choice Sets , 2002 .
[30] Arthur Gretton,et al. Demystifying MMD GANs , 2018, ICLR.
[31] Jürgen Schmidhuber,et al. Learning Factorial Codes by Predictability Minimization , 1992 .
[32] Christopher M. Bishop,et al. GTM: The Generative Topographic Mapping , 1998, Neural Computation.
[33] Jacob D. Abernethy,et al. How to Train Your DRAGAN , 2017, ArXiv.
[34] Bernhard Schölkopf,et al. Hilbert Space Embeddings and Metrics on Probability Measures , 2009, J. Mach. Learn. Res..
[35] Aaron C. Courville,et al. Improved Training of Wasserstein GANs , 2017, NIPS.
[36] Charles A. Sutton,et al. VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning , 2017, NIPS.