Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks

In Autonomous Driving (AD) systems, perception is both security and safety critical. Despite various prior studies on its security issues, all of them only consider attacks on camera- or LiDAR-based AD perception alone. However, production AD systems today predominantly adopt a Multi-Sensor Fusion (MSF) based design, which in principle can be more robust against these attacks under the assumption that not all fusion sources are (or can be) attacked at the same time. In this paper, we present the first study of security issues of MSF-based perception in AD systems. We directly challenge the basic MSF design assumption above by exploring the possibility of attacking all fusion sources simultaneously. This allows us, for the first time, to understand how much security guarantee MSF can fundamentally provide as a general defense strategy for AD perception. We formulate the attack as an optimization problem to generate a physically-realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it. To systematically generate such a physical-world attack, we propose a novel attack pipeline that addresses two main design challenges: (1) non-differentiable target camera and LiDAR sensing systems, and (2) non-differentiable cell-level aggregated features popularly used in LiDAR-based AD perception. We evaluate our attack on MSF algorithms included in representative open-source industry-grade AD systems in real-world driving scenarios. Our results show that the attack achieves over 90% success rate across different object types and MSF algorithms. Our attack is also found to be stealthy, robust to victim positions, transferable across MSF algorithms, and physical-world realizable after being 3D-printed and captured by LiDAR and camera devices. To concretely assess the end-to-end safety impact, we further perform simulation evaluation and show that it can cause a 100% vehicle collision rate for an industry-grade AD system. We also evaluate and discuss defense strategies.
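To illustrate the optimization formulation described above, the following is a minimal conceptual sketch (not the authors' code) of one gradient step that perturbs an object mesh so that both the camera and LiDAR branches of an MSF pipeline fail to detect it. The helpers `render_image`, `sample_lidar_points`, `camera_model`, and `lidar_model` are hypothetical stand-ins for a differentiable rendering approximation of the camera, a differentiable approximation of the LiDAR sensing process, and the two perception models under attack; the smoothness term is only a crude proxy for the surface-realizability constraints a 3D-printable object would need.

```python
# Minimal sketch, assuming hypothetical differentiable surrogates for the
# (non-differentiable) camera and LiDAR sensing pipelines.
import torch


def attack_step(vertices, faces, camera_model, lidar_model,
                render_image, sample_lidar_points, optimizer, lam=0.1):
    """One gradient step that pushes BOTH detectors to miss the object."""
    optimizer.zero_grad()

    # Project the adversarial mesh into the two fusion sources:
    # an image for the camera branch and a point cloud for the LiDAR branch.
    image = render_image(vertices, faces)          # (H, W, 3) tensor
    points = sample_lidar_points(vertices, faces)  # (N, 3) tensor

    # Detection confidence of the adversarial object from each branch.
    cam_score = camera_model(image)
    lidar_score = lidar_model(points)

    # Suppress detection in both fusion sources simultaneously, while a
    # smoothness regularizer (crude proxy) keeps the shape physically printable.
    adv_loss = cam_score + lidar_score
    smooth_loss = ((vertices - vertices.mean(dim=0)) ** 2).mean()
    loss = adv_loss + lam * smooth_loss

    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the mesh vertices would be the optimization variable (e.g., `optimizer = torch.optim.Adam([vertices])`), and the step would be repeated until both detection scores drop below the detectors' thresholds.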
