R. Thomas McCoy | Tal Linzen | Paul Smolensky | Paul Soulos
[1] Rachel Rudinger, et al. Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation, 2018, BlackboxNLP@EMNLP.
[2] Samuel R. Bowman, et al. BLiMP: A Benchmark of Linguistic Minimal Pairs for English, 2019, SCIL.
[3] Roger P. Levy, et al. A Systematic Assessment of Syntactic Generalization in Neural Language Models, 2020, ACL.
[4] Wayne A. Wickelgren. Context-sensitive coding, associative memory, and serial order in (speech) behavior, 1969, Psychological Review.
[5] Liang Zhao, et al. Compositional Generalization for Primitive Substitutions, 2019, EMNLP.
[6] Jacob Andreas, et al. Measuring Compositionality in Representation Learning, 2019, ICLR.
[7] Afra Alishahi, et al. Correlating Neural and Symbolic Representations of Language, 2019, ACL.
[8] Yonatan Belinkov, et al. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks, 2016, ICLR.
[9] Jakob Uszkoreit, et al. A Decomposable Attention Model for Natural Language Inference, 2016, EMNLP.
[10] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[11] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, arXiv.
[12] C. Lee Giles, et al. Extraction of rules from discrete-time recurrent neural networks, 1996, Neural Networks.
[13] Koustuv Sinha, et al. Probing Linguistic Systematicity, 2020, ACL.
[14] Benoît Sagot, et al. What Does BERT Learn about the Structure of Language?, 2019, ACL.
[15] Illtyd Trethowan. Causality, 1938.
[16] Yonatan Belinkov, et al. Analysis Methods in Neural Language Processing: A Survey, 2018, TACL.
[17] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[18] Alex Wang, et al. What do you learn from context? Probing for sentence structure in contextualized word representations, 2019, ICLR.
[19] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[20] Mathijs Mul, et al. The compositionality of neural networks: integrating symbolism and connectionism, 2019, arXiv.
[21] P. Smolensky. Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems, 1990, Artificial Intelligence.
[22] Marco Baroni, et al. Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks, 2017, ICML.
[23] P. Smolensky. The Constituent Structure of Connectionist Mental States: A Reply to Fodor and Pylyshyn, 1987, Southern Journal of Philosophy.
[24] Omer Levy, et al. Deep RNNs Encode Soft Hierarchical Syntax, 2018, ACL.
[25] Douwe Kiela, et al. SentEval: An Evaluation Toolkit for Universal Sentence Representations, 2018, LREC.
[26] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[27] P. Smolensky. Connectionism, Constituency, and the Language of Thought, 1988, Technical Report CU-CS-416-88.
[28] Ivan Titov, et al. Information-Theoretic Probing with Minimum Description Length, 2020, EMNLP.
[29] Christopher D. Manning, et al. A Structural Probe for Finding Syntax in Word Representations, 2019, NAACL.
[30] Aaron C. Courville, et al. Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks, 2018, ICLR.
[31] Dan Klein, et al. Accurate Unlexicalized Parsing, 2003, ACL.
[32] R. Thomas McCoy, et al. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference, 2019, ACL.
[33] J. Fodor, et al. Connectionism and cognitive architecture: A critical analysis, 1988, Cognition.
[34] Emmanuel Dupoux, et al. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies, 2016, TACL.
[35] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[36] J. Fodor, et al. Connectionism and the problem of systematicity: Why Smolensky's solution doesn't work, 1990, Cognition.
[37] Willem H. Zuidema, et al. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure, 2017, J. Artif. Intell. Res.
[38] Yoshua Bengio, et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, 2014, EMNLP.
[39] Florian Mohnert, et al. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information, 2018, BlackboxNLP@EMNLP.
[40] J. Fodor. Connectionism and the problem of systematicity (continued): Why Smolensky's solution still doesn't work, 1997, Cognition.
[41] Jacob Andreas, et al. Compositional Explanations of Neurons, 2020, NeurIPS.
[42] Mathijs Mul, et al. Compositionality Decomposed: How do Neural Networks Generalise?, 2019, J. Artif. Intell. Res.
[43] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[44] Luke S. Zettlemoyer, et al. Dissecting Contextual Word Embeddings: Architecture and Representation, 2018, EMNLP.
[45] Yonatan Belinkov, et al. Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks, 2017, IJCNLP.
[46] Li Deng, et al. Question-Answering with Grammatically-Interpretable Representations, 2017, AAAI.
[47] Samuel J. Gershman, et al. Analyzing machine-learned representations: A natural language case study, 2019, Cogn. Sci.
[48] Guillaume Lample, et al. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties, 2018, ACL.
[49] Christopher Potts, et al. A large annotated corpus for learning natural language inference, 2015, EMNLP.
[50] Geoffrey Zweig, et al. Linguistic Regularities in Continuous Space Word Representations, 2013, NAACL.
[51] Holger Schwenk, et al. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data, 2017, EMNLP.
[52] Julia Hirschberg, et al. V-Measure: A Conditional Entropy-Based External Cluster Evaluation Measure, 2007, EMNLP.
[53] Tal Linzen, et al. Targeted Syntactic Evaluation of Language Models, 2018, EMNLP.
[54] Andy Way, et al. Investigating ‘Aspect’ in NMT and SMT: Translating the English Simple Past and Present Perfect, 2017.
[55] Willem Zuidema, et al. Blackbox Meets Blackbox: Representational Similarity & Stability Analysis of Neural Language Models and Brains, 2019, BlackboxNLP@ACL.
[56] Yonatan Belinkov, et al. Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?, 2020, arXiv.
[57] Ewan Dunbar, et al. RNNs Implicitly Implement Tensor Product Representations, 2018, ICLR.
[58] Christopher Potts, et al. A Fast Unified Model for Parsing and Sentence Understanding, 2016, ACL.
[59] Marco Baroni, et al. The emergence of number and syntax units in LSTM language models, 2019, NAACL.
[60] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[61] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[62] Sanja Fidler, et al. Skip-Thought Vectors, 2015, NIPS.
[63] Eran Yahav, et al. Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples, 2017, ICML.