Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms
Shie Mannor | Jiashi Feng | Bingyi Kang | Huan Xu | Tom Zahavy | Alex Sivak
[1] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[2] Shie Mannor, et al. Robustness and Generalization, 2010, Machine Learning.
[3] Yoshua Bengio, et al. A Closer Look at Memorization in Deep Networks, 2017, ICML.
[4] Pierre Baldi, et al. Understanding Dropout, 2013, NIPS.
[5] Pierre Priouret, et al. Adaptive Algorithms and Stochastic Approximations, 1990, Applications of Mathematics.
[6] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008, J. Mach. Learn. Res.
[7] Colin McDiarmid, et al. Surveys in Combinatorics, 1989: On the Method of Bounded Differences, 1989.
[8] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[9] Yoram Singer, et al. Train Faster, Generalize Better: Stability of Stochastic Gradient Descent, 2015, ICML.
[10] Shin Ishii, et al. Distributional Smoothing with Virtual Adversarial Training, 2015, ICLR 2016.
[11] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[12] Prateek Jain, et al. To Drop or Not to Drop: Robustness, Consistency and Differential Privacy Properties of Dropout, 2015, ArXiv.
[13] Samy Bengio, et al. Understanding Deep Learning Requires Rethinking Generalization, 2016, ICLR.
[14] Sergey Levine, et al. End-to-End Training of Deep Visuomotor Policies, 2015, J. Mach. Learn. Res.
[15] Lourdes Agapito, et al. Semi-supervised Learning Using an Unsupervised Atlas, 2014, ECML/PKDD.
[16] Holger Ulmer, et al. Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2017, ArXiv.
[17] Tom Schaul, et al. Prioritized Experience Replay, 2015, ICLR.
[18] Shin Ishii, et al. Distributional Smoothing by Virtual Adversarial Examples, 2015, ICLR.
[19] Uri Shaham, et al. Understanding Adversarial Training: Increasing Local Stability of Supervised Models Through Robust Optimization, 2015, Neurocomputing.
[20] Leslie Pack Kaelbling, et al. Generalization in Deep Learning, 2017, ArXiv.
[21] S. C. Suddarth, et al. Rule-Injection Hints as a Means of Improving Network Performance and Learning Time, 1990, EURASIP Workshop.
[22] Yann LeCun, et al. The Loss Surfaces of Multilayer Networks, 2014, AISTATS.
[23] Sida I. Wang, et al. Dropout Training as Adaptive Regularization, 2013, NIPS.
[24] Shane Legg, et al. Human-level Control Through Deep Reinforcement Learning, 2015, Nature.
[25] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.