Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
Gregor Urban | Krzysztof J. Geras | Samira Ebrahimi Kahou | Özlem Aslan | Shengjie Wang | Rich Caruana | Abdel-rahman Mohamed | Matthai Philipose | Matthew Richardson
[1] R. Srikant,et al. Why Deep Neural Networks for Function Approximation? , 2016, ICLR.
[2] Ruslan Salakhutdinov,et al. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning , 2015, ICLR.
[3] Razvan Pascanu,et al. Theano: new features and speed improvements , 2012, ArXiv.
[4] Yoshua Bengio,et al. FitNets: Hints for Thin Deep Nets , 2014, ICLR.
[5] Roland Memisevic,et al. Zero-bias autoencoders and the benefits of co-adapting features , 2014, ICLR.
[6] Samy Bengio,et al. Understanding deep learning requires rethinking generalization , 2016, ICLR.
[7] Prabhat,et al. Scalable Bayesian Optimization Using Deep Neural Networks , 2015, ICML.
[8] George Cybenko,et al. Approximation by superpositions of a sigmoidal function , 1989, Math. Control. Signals Syst..
[9] Roland Memisevic,et al. How far can we go without convolution: Improving fully-connected networks , 2015, ArXiv.
[10] Yoshua Bengio,et al. Big Neural Networks Waste Capacity , 2013, ICLR.
[11] Geoffrey E. Hinton,et al. Distilling the Knowledge in a Neural Network , 2015, ArXiv.
[12] Rich Caruana,et al. Do Deep Nets Really Need to be Deep? , 2013, NIPS.
[13] Jürgen Schmidhuber,et al. Training Very Deep Networks , 2015, NIPS.
[14] Alexander J. Smola,et al. Fastfood - Computing Hilbert Space Expansions in loglinear time , 2013, ICML.
[15] Dong Yu,et al. Conversational Speech Transcription Using Context-Dependent Deep Neural Networks , 2012, ICML.
[16] Xinyun Chen,et al. Delving into Transferable Adversarial Examples and Black-box Attacks , 2016, ICLR.
[17] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[18] William Chan,et al. Transferring knowledge from a RNN to a DNN , 2015, INTERSPEECH.
[19] Yifan Gong,et al. Learning small-size DNN with output-distribution-based criteria , 2014, INTERSPEECH.
[20] Antonio Torralba,et al. 80 Million Tiny Images: A Large Dataset for Non-parametric Object and Scene Recognition , 2008, IEEE Trans. Pattern Anal. Mach. Intell.
[21] Charles A. Sutton,et al. Scheduled denoising autoencoders , 2015, ICLR.
[22] Razvan Pascanu,et al. Policy Distillation , 2015, ICLR.
[23] R. Srikant,et al. Why Deep Neural Networks? , 2016, ArXiv.
[24] Amnon Shashua,et al. Convolutional Rectifier Networks as Generalized Tensor Decompositions , 2016, ICML.
[25] Yann LeCun,et al. Understanding Deep Architectures using a Recursive Convolutional Network , 2013, ICLR.
[26] Omer Levy,et al. Simulating Action Dynamics with Neural Process Networks , 2018, ICLR.
[27] Jasper Snoek,et al. Practical Bayesian Optimization of Machine Learning Algorithms , 2012, NIPS.
[28] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[29] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[30] Matthew Richardson,et al. Blending LSTMs into CNNs , 2015, ICLR.
[31] Yoshua Bengio,et al. Understanding the difficulty of training deep feedforward neural networks , 2010, AISTATS.