Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we explore the possibility of strategically controlling the spoofing attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization and design an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.
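To illustrate the general idea of combining local optimization with global sampling (the strategy the abstract describes for escaping poor local optima), here is a minimal sketch. It is not the paper's actual method: `detector_loss` is a hypothetical stand-in for a surrogate loss over the LiDAR detection pipeline, and all dimensions, bounds, and step sizes are illustrative assumptions.

```python
# Sketch: global sampling of starting perturbations + local refinement of each,
# keeping the best result. `detector_loss` is a hypothetical placeholder for a
# surrogate loss over the LiDAR object detector (lower == spoofed obstacle more
# likely detected); the real attack models the perturbation and objective
# functions around an actual perception pipeline.
import numpy as np

rng = np.random.default_rng(0)

def detector_loss(perturbation: np.ndarray) -> float:
    # Placeholder non-convex landscape with many local minima.
    return float(np.sum(np.sin(3 * perturbation) ** 2 + 0.1 * perturbation ** 2))

def numerical_gradient(f, x, eps=1e-4):
    # Central-difference gradient, so the sketch needs no autodiff framework.
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

def local_optimize(x0, steps=200, lr=0.05):
    # Plain gradient descent from one starting point (local refinement).
    x = x0.copy()
    for _ in range(steps):
        x -= lr * numerical_gradient(detector_loss, x)
    return x, detector_loss(x)

def attack_with_global_sampling(dim=6, n_samples=20, bound=2.0):
    # Sample many starting perturbations globally, refine each locally, keep the best.
    best_x, best_loss = None, np.inf
    for _ in range(n_samples):
        x0 = rng.uniform(-bound, bound, size=dim)
        x, loss = local_optimize(x0)
        if loss < best_loss:
            best_x, best_loss = x, loss
    return best_x, best_loss

if __name__ == "__main__":
    x, loss = attack_with_global_sampling()
    print("best perturbation:", np.round(x, 3), "loss:", round(loss, 4))
```

The design point is simply that gradient-based refinement alone can stall in a local optimum of a non-convex objective, while restarting from globally sampled perturbations and keeping the best refined candidate raises the chance of finding one that succeeds.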
