Yann LeCun | Ruslan Salakhutdinov | William W. Cohen | Kaiming He | Zhilin Yang | Bhuwan Dhingra | Junbo Jake Zhao
[1] Xi Chen, et al. PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications, 2017, ICLR.
[2] Christopher Potts, et al. A large annotated corpus for learning natural language inference, 2015, EMNLP.
[3] Holger Schwenk, et al. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data, 2017, EMNLP.
[4] Christopher Clark, et al. Simple and Effective Multi-Paragraph Reading Comprehension, 2017, ACL.
[5] Yoshua Bengio, et al. Convolutional networks for images, speech, and time series, 1998.
[6] Andrew McCallum, et al. Linguistically-Informed Self-Attention for Semantic Role Labeling, 2018, EMNLP.
[7] Quoc V. Le, et al. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension, 2018, ICLR.
[8] Ah Chung Tsoi, et al. The Graph Neural Network Model, 2009, IEEE Transactions on Neural Networks.
[9] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[10] Andrew McCallum, et al. Attending to All Mention Pairs for Full Abstract Biological Relation Extraction, 2017, AKBC@NIPS.
[11] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[12] Andrew M. Dai, et al. Adversarial Training Methods for Semi-Supervised Text Classification, 2016, ICLR.
[13] Stephen Clark, et al. Jointly learning sentence embeddings and syntax with unsupervised Tree-LSTMs, 2017, Natural Language Engineering.
[14] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2017, NAACL.
[15] Yoshua Bengio, et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, 2014, arXiv.
[16] R. Zemel, et al. Neural Relational Inference for Interacting Systems, 2018, ICML.
[17] Tao Shen, et al. DiSAN: Directional Self-Attention Network for RNN/CNN-free Language Understanding, 2017, AAAI.
[18] Xiaodong Liu, et al. Stochastic Answer Networks for Natural Language Inference, 2018, arXiv.
[19] Sanja Fidler, et al. Skip-Thought Vectors, 2015, NIPS.
[20] Geoffrey J. Gordon, et al. DeepArchitect: Automatically Designing and Training Deep Architectures, 2017, arXiv.
[21] Ruslan Salakhutdinov, et al. Neural Models for Reasoning over Multiple Mentions Using Coreference, 2018, NAACL.
[22] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[23] Yejin Choi, et al. Dynamic Entity Representations in Neural Language Models, 2017, EMNLP.
[24] Abhinav Gupta, et al. Non-local Neural Networks, 2018, CVPR.
[25] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[26] Lukasz Kaiser, et al. Generating Wikipedia by Summarizing Long Sequences, 2018, ICLR.
[27] Christopher D. Manning, et al. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks, 2015, ACL.
[28] Ruslan Salakhutdinov, et al. Breaking the Softmax Bottleneck: A High-Rank RNN Language Model, 2017, ICLR.
[29] Gang Sun, et al. Squeeze-and-Excitation Networks, 2018, CVPR.
[30] Razvan Pascanu, et al. Relational inductive biases, deep learning, and graph networks, 2018, arXiv.
[31] Stefan Carlsson, et al. CNN Features Off-the-Shelf: An Astounding Baseline for Recognition, 2014, CVPR Workshops.
[32] Koray Kavukcuoglu, et al. Pixel Recurrent Neural Networks, 2016, ICML.
[33] Wang Ling, et al. Learning to Compose Words into Sentences with Reinforcement Learning, 2016, ICLR.
[34] Quoc V. Le, et al. Neural Architecture Search with Reinforcement Learning, 2016, ICLR.
[35] Samuel R. Bowman, et al. Do latent tree learning models identify meaningful structure in sentences?, 2017, TACL.
[36] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[37] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[38] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[39] Christopher Potts, et al. Learning Word Vectors for Sentiment Analysis, 2011, ACL.
[40] Jihun Choi, et al. Unsupervised Learning of Task-Specific Tree Structures with Tree-LSTMs, 2017, arXiv.
[41] Zhen-Hua Ling, et al. Enhanced LSTM for Natural Language Inference, 2016, ACL.
[42] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[43] Dustin Tran, et al. Image Transformer, 2018, ICML.
[44] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[45] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[46] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[47] Andrew M. Dai, et al. Virtual Adversarial Training for Semi-Supervised Text Classification, 2016, arXiv.
[48] Andrew Y. Ng, et al. Parsing Natural Scenes and Natural Language with Recursive Neural Networks, 2011, ICML.
[49] Han Zhang, et al. Self-Attention Generative Adversarial Networks, 2018, ICML.
[50] Ramesh Raskar, et al. Designing Neural Network Architectures using Reinforcement Learning, 2016, ICLR.
[51] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.