Ioannis Mitliagkas | Sandeep Subramanian | Yoshua Bengio | Jonathan Binas | Anirudh Goyal | Michael C. Mozer | Alex Lamb | Denis Kazakov
[1] A. Reber. Implicit learning of artificial grammars, 1967.
[2] S. Boll et al. Suppression of acoustic noise in speech using spectral subtraction, 1979.
[3] J. J. Hopfield et al. Neurons with graded response have collective computational properties like those of two-state neurons, 1984, Proceedings of the National Academy of Sciences of the United States of America.
[4] Geoffrey E. Hinton. Mapping Part-Whole Hierarchies into Connectionist Networks, 1990, Artificial Intelligence.
[5] Yann LeCun et al. Tangent Prop: A Formalism for Specifying Selected Invariances in an Adaptive Network, 1991, NIPS.
[6] Jude W. Shavlik et al. Learning Symbolic Rules Using Artificial Neural Networks, 1993, ICML.
[7] Pascal Koiran. Dynamics of Discrete Time, Continuous State Hopfield Networks, 1994, Neural Computation.
[8] Yoshua Bengio et al. Learning long-term dependencies with gradient descent is difficult, 1994, IEEE Transactions on Neural Networks.
[9] Jürgen Schmidhuber et al. Long Short-Term Memory, 1997, Neural Computation.
[10] Sepp Hochreiter et al. The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions, 1998, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems.
[11] John J. Hopfield. Neural networks and physical systems with emergent collective computational abilities, 1982, Proceedings of the National Academy of Sciences of the United States of America.
[12] M. Mozer. Attractor Networks, 2000.
[13] M. Masson. Using confidence intervals for graphically based data interpretation, 2003, Canadian Journal of Experimental Psychology.
[14] Hava T. Siegelmann et al. Analog-symbolic memory that tracks via reconsolidation, 2008.
[15] Pascal Vincent et al. Representation Learning: A Review and New Perspectives, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[16] Yoshua Bengio et al. Regularized Auto-Encoders Estimate Local Statistics, 2012, ICLR.
[17] Yoshua Bengio et al. Better Mixing via Deep Representations, 2012, ICML.
[18] Joan Bruna et al. Intriguing properties of neural networks, 2013, ICLR.
[19] Luca Rigazio et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[20] Jonathon Shlens et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[21] Samy Bengio et al. Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks, 2015, NIPS.
[22] Nikos Komodakis et al. Wide Residual Networks, 2016, BMVC.
[23] Ananthram Swami et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2016, IEEE Symposium on Security and Privacy (SP).
[24] Renjie Liao et al. Learning Deep Parsimonious Representations, 2016, NIPS.
[25] Jian Sun et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[26] Yang Song et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Alexandros G. Dimakis et al. The Robust Manifold Defense: Adversarial Training using Generative Models, 2017, arXiv.
[28] Yoshua Bengio. The Consciousness Prior, 2017, arXiv.
[29] Hao Chen et al. MagNet: A Two-Pronged Defense against Adversarial Examples, 2017, CCS.
[30] Yoshua Bengio et al. Improving Generative Adversarial Networks with Denoising Feature Matching, 2016, ICLR.
[31] Martín Abadi et al. Adversarial Patch, 2017, arXiv.
[32] Jun Zhu et al. Towards Robust Detection of Adversarial Examples, 2017, NeurIPS.
[33] Ioannis Mitliagkas et al. Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations, 2018, arXiv.
[34] Roberto Caldelli et al. Adversarial Examples Detection in Features Distance Spaces, 2018, ECCV Workshops.
[35] Kibok Lee et al. Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples, 2017, ICLR.
[36] Michael C. Mozer et al. State-Denoised Recurrent Neural Networks, 2018, arXiv.
[37] David A. Wagner et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[38] Aleksander Madry et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[39] Aleksander Madry et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[40] Colin Raffel et al. Thermometer Encoding: One Hot Way To Resist Adversarial Examples, 2018, ICLR.
[41] Yanjun Qi et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[42] Yi Zhang et al. Stronger generalization bounds for deep nets via a compression approach, 2018, ICML.
[43] Xiaolin Hu et al. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[44] Kibok Lee et al. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks, 2018, NeurIPS.
[45] Martin Wattenberg et al. Adversarial Spheres, 2018, ICLR.
[46] Dan Klein et al. Learning with Latent Language, 2017, NAACL.
[47] Rama Chellappa et al. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, 2018, ICLR.