Sepp Hochreiter | Elisabeth Rumetshofer | Angela Bitto-Nemling | Hubert Ramsauer | David Kreil | Fei Tang | Gunter Klambauer | Andreas Fürst | Viet Tran | Johannes Lehner | Michael Kopp
[1] Frank Hutter et al. Decoupled Weight Decay Regularization, 2017, ICLR.
[2] Seung Woo Lee et al. Birdsnap: Large-Scale Fine-Grained Visual Categorization of Birds, 2014, CVPR.
[3] Alexander A. Alemi et al. On Variational Bounds of Mutual Information, 2019, ICML.
[4] Yu Wang et al. Joint Contrastive Learning with Infinite Possibilities, 2020, NeurIPS.
[5] Benjamin Recht et al. Measuring Robustness to Natural Distribution Shifts in Image Classification, 2020, NeurIPS.
[6] Honglak Lee et al. An efficient framework for learning sentence representations, 2018, ICLR.
[7] Andrew Zisserman et al. Automated Flower Classification over a Large Number of Classes, 2008, ICVGIP.
[8] Ruslan Salakhutdinov et al. Conditional Contrastive Learning: Removing Undesirable Information in Self-Supervised Representations, 2021, arXiv.
[9] Benjamin Recht et al. Do ImageNet Classifiers Generalize to ImageNet?, 2019, ICML.
[10] Yair Carmon et al. Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization, 2021, ICML.
[11] Mike Wu et al. Conditional Negative Sampling for Contrastive Learning of Visual Representations, 2020, ICLR.
[12] M. Bethge et al. Shortcut learning in deep neural networks, 2020, Nature Machine Intelligence.
[13] Johannes Stallkamp et al. The German Traffic Sign Recognition Benchmark: A multi-class classification competition, 2011, IJCNN.
[14] Ronald F. Boisvert et al. NIST Handbook of Mathematical Functions, 2010.
[15] Andrew Zisserman et al. Self-supervised Co-training for Video Representation Learning, 2020, NeurIPS.
[16] Li Fei-Fei et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[17] Stella X. Yu et al. Unsupervised Feature Learning via Non-parametric Instance Discrimination, 2018, CVPR.
[18] Rufin van Rullen et al. Does language help generalization in vision models?, 2021, CoNLL.
[19] Jian Sun et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[20] Xinlei Chen et al. Exploring Simple Siamese Representation Learning, 2021, CVPR.
[21] Andreas Terzis et al. Poisoning and Backdooring Contrastive Learning, 2021, arXiv.
[22] Junnan Li et al. Prototypical Contrastive Learning of Unsupervised Representations, 2020, ICLR.
[23] Gigliola Vaglini et al. Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search, 2021, IMPROVE.
[24] Frank Hutter et al. SGDR: Stochastic Gradient Descent with Warm Restarts, 2016, ICLR.
[25] Karsten Roth et al. Characterizing Generalization under Out-of-Distribution Shifts in Deep Metric Learning, 2021, NeurIPS.
[26] Jonathan Krause et al. 3D Object Representations for Fine-Grained Categorization, 2013, ICCV Workshops.
[27] Oriol Vinyals et al. Representation Learning with Contrastive Predictive Coding, 2018, arXiv.
[28] Ilya Sutskever et al. Learning Transferable Visual Models From Natural Language Supervision, 2021, ICML.
[29] Laurens van der Maaten et al. Self-Supervised Learning of Pretext-Invariant Representations, 2020, CVPR.
[30] Karl Stratos et al. Formal Limitations on the Measurement of Mutual Information, 2018, AISTATS.
[31] Michal Valko et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning, 2020, NeurIPS.
[32] Michael Tschannen et al. On Mutual Information Maximization for Representation Learning, 2019, ICLR.
[33] Geoffrey E. Hinton et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[34] Yizhou Sun et al. On Sampling Strategies for Neural Network-based Collaborative Filtering, 2017, KDD.
[35] Yoshua Bengio et al. Mutual Information Neural Estimation, 2018, ICML.
[36] David A. Shamma et al. YFCC100M, 2015, Commun. ACM.
[37] Chen Change Loy et al. Learning to Prompt for Vision-Language Models, 2021, arXiv.
[38] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities, 1982, PNAS.
[39] Max Welling et al. Semi-supervised Learning with Deep Generative Models, 2014, NIPS.
[40] Zhe Gan et al. Simpler, Faster, Stronger: Breaking The log-K Curse On Contrastive Learners With FlatNCE, 2021, arXiv.
[41] Michael P. Wellman et al. Explaining 'Explaining Away', 1993, IEEE Trans. Pattern Anal. Mach. Intell.
[42] L. B. Soros et al. CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders, 2021, NeurIPS.
[43] Nan Duan et al. CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval, 2021, Neurocomputing.
[44] Tianyu Gao et al. SimCSE: Simple Contrastive Learning of Sentence Embeddings, 2021, EMNLP.
[45] Nassir Navab et al. Segmentation in Style: Unsupervised Semantic Image Segmentation with StyleGAN and CLIP, 2021, arXiv.
[46] Christoph H. Lampert et al. Learning to detect unseen object classes by between-class attribute transfer, 2009, CVPR.
[47] Zhe Gan et al. CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information, 2020, ICML.
[48] Alexander Binder et al. Unmasking Clever Hans predictors and assessing what machines really learn, 2019, Nature Communications.
[49] Aapo Hyvärinen et al. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, 2010, AISTATS.
[50] Radu Soricut et al. Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning, 2018, ACL.
[51] Jeffrey Dean et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[52] J. J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons, 1984, PNAS.
[53] Kaiming He et al. Momentum Contrast for Unsupervised Visual Representation Learning, 2020, CVPR.
[54] Geir Kjetil Sandve et al. Hopfield Networks is All You Need, 2020, arXiv.
[55] Martin J. Wainwright et al. Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization, 2007, NIPS.
[56] Geir Kjetil Sandve et al. Modern Hopfield Networks and Attention for Immune Repertoire Classification, 2020, bioRxiv.
[57] John J. Hopfield et al. Dense Associative Memory for Pattern Recognition, 2016, NIPS.
[58] Matthias Löwe et al. On a Model of Associative Memory with Huge Storage Capacity, 2017, arXiv:1702.01929.
[59] Matthew Henderson et al. Efficient Natural Language Response Suggestion for Smart Reply, 2017, arXiv.
[60] Alexander D'Amour et al. Underspecification Presents Challenges for Credibility in Modern Machine Learning, 2020, J. Mach. Learn. Res.
[61] Ali Razavi et al. Data-Efficient Image Recognition with Contrastive Predictive Coding, 2019, ICML.
[62] Michael S. Bernstein et al. On the Opportunities and Risks of Foundation Models, 2021, arXiv.
[63] Pengfei Xiong et al. CLIP2Video: Mastering Video-Text Retrieval via Image CLIP, 2021, arXiv.
[64] Alec Radford et al. Evaluating CLIP: Towards Characterization of Broader Capabilities and Downstream Implications, 2021, arXiv.
[65] Kurt Keutzer et al. How Much Can CLIP Benefit Vision-and-Language Tasks?, 2021, ICLR.
[66] Ali Farhadi et al. Robust fine-tuning of zero-shot models, 2021, arXiv.
[67] Ming-Wei Chang et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.