Duen Horng Chau | Jason Martin | Shang-Tse Chen | Cory Cornelius