Jason Yosinski | Sumanth Dathathri | Janice Lan | Rosanne Liu | Andrea Madotto | Piero Molino | Jane Hung | Eric Frank
[1] Yoshua Bengio,et al. A Neural Probabilistic Language Model , 2003, J. Mach. Learn. Res..
[2] Zhifang Sui,et al. Learning to Control the Fine-grained Sentiment for Story Ending Generation , 2019, ACL.
[3] Jason Yosinski,et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images , 2015, CVPR.
[4] Christopher Potts,et al. Learning Word Vectors for Sentiment Analysis , 2011, ACL.
[5] Yoav Goldberg,et al. Adversarial Removal of Demographic Attributes from Text Data , 2018, EMNLP.
[6] Guillaume Lample,et al. Multiple-Attribute Text Rewriting , 2018, ICLR.
[7] Yiming Yang,et al. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context , 2019, ACL.
[8] Yejin Choi,et al. The Curious Case of Neural Text Degeneration , 2019, ICLR.
[9] Wang Ling,et al. Better Document-Level Machine Translation with Bayes’ Rule , 2019, Transactions of the Association for Computational Linguistics.
[10] Lei Yu,et al. The Neural Noisy Channel , 2016, ICLR.
[11] Regina Barzilay,et al. Style Transfer from Non-Parallel Text by Cross-Alignment , 2017, NIPS.
[12] J. Rosenthal,et al. Optimal scaling of discrete approximations to Langevin diffusions , 1998 .
[13] C. E. Shannon. A Mathematical Theory of Communication , 1948, Bell System Technical Journal.
[14] Yejin Choi,et al. Learning to Write with Cooperative Discriminators , 2018, ACL.
[15] Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning , 1992, Machine Learning.
[16] Matthias Hagen,et al. Crowdsourcing a Large Corpus of Clickbait on Twitter , 2018, COLING.
[17] Graham Neubig,et al. Controlling Output Length in Neural Encoder-Decoders , 2016, EMNLP.
[18] Jianfeng Gao,et al. A Diversity-Promoting Objective Function for Neural Conversation Models , 2015, NAACL.
[19] Lantao Yu,et al. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient , 2016, AAAI.
[20] Alec Radford,et al. Improving Language Understanding by Generative Pre-Training , 2018 .
[21] Ilya Sutskever,et al. Language Models are Unsupervised Multitask Learners , 2019 .
[22] Dejing Dou,et al. HotFlip: White-Box Adversarial Examples for Text Classification , 2017, ACL.
[23] Yann Dauphin,et al. Hierarchical Neural Story Generation , 2018, ACL.
[24] Alan Ritter,et al. Generating More Interesting Responses in Neural Conversation Models with Distributional Constraints , 2018, EMNLP.
[25] Mary Williamson,et al. Facebook AI’s WMT20 News Translation Task Submission , 2020, WMT.
[26] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[27] Christopher Potts,et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank , 2013, EMNLP.
[28] N. Metropolis,et al. Equation of State Calculations by Fast Computing Machines , 1953, Journal of Chemical Physics.
[29] Hinrich Schütze,et al. Foundations of Statistical Natural Language Processing , 1999, MIT Press.
[30] Percy Liang,et al. Delete, Retrieve, Generate: a Simple Approach to Sentiment and Style Transfer , 2018, NAACL.
[31] Dongyan Zhao,et al. Plan-And-Write: Towards Better Automatic Storytelling , 2018, AAAI.
[32] Lav R. Varshney,et al. CTRL: A Conditional Transformer Language Model for Controllable Generation , 2019, ArXiv.
[33] Yoshua Bengio,et al. Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space , 2017, CVPR.
[34] Sameer Singh,et al. Universal Adversarial Triggers for Attacking and Analyzing NLP , 2019, EMNLP.
[35] Xing Shi,et al. Hafez: an Interactive Poetry Generation System , 2017, ACL.
[36] Joelle Pineau,et al. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation , 2016, EMNLP.
[37] Samuel R. Bowman,et al. Can Unconditional Language Models Recover Arbitrary Sentences? , 2019, NeurIPS.
[38] Yun Chen,et al. A Stable and Effective Learning Strategy for Trainable Greedy Decoding , 2018, EMNLP.
[39] Eric P. Xing,et al. Controllable Text Generation , 2017, ArXiv.
[40] Graham Neubig,et al. Learning to Translate in Real-time with Neural Machine Translation , 2016, EACL.
[41] Veselin Stoyanov,et al. Simple Fusion: Return of the Language Model , 2018, WMT.
[42] Victor O. K. Li,et al. Trainable Greedy Decoding for Neural Machine Translation , 2017, EMNLP.
[43] Ben Poole,et al. Categorical Reparameterization with Gumbel-Softmax , 2016, ICLR.
[44] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[45] Jason Weston,et al. What makes a good conversation? How controllable attributes affect human judgments , 2019, NAACL.
[46] R. Tweedie,et al. Exponential convergence of Langevin distributions and their discrete approximations , 1996 .
[47] Xuanjing Huang,et al. Style Transformer: Unpaired Text Style Transfer without Disentangled Latent Representation , 2019, ACL.
[48] Yoav Goldberg,et al. Controlling Linguistic Style Aspects in Neural Language Generation , 2017, ArXiv.
[49] Alec Radford,et al. Fine-Tuning Language Models from Human Preferences , 2019, ArXiv.
[50] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[51] Nanyun Peng,et al. Towards Controllable Story Generation , 2018 .
[52] Chris Dyer,et al. Putting Machine Translation in Context with the Noisy Channel Model , 2019, ArXiv.
[53] Nathan Ng,et al. Simple and Effective Noisy Channel Modeling for Neural Machine Translation , 2019, EMNLP.
[54] Myle Ott,et al. Facebook FAIR’s WMT19 News Translation Task Submission , 2019, WMT.