Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack

Automated Lane Centering (ALC) systems are convenient and widely deployed today, but they are also highly security- and safety-critical. In this work, we are the first to systematically study the security of state-of-the-art deep learning based ALC systems in their designed operational domains under physical-world adversarial attacks. We formulate the problem with a safety-critical attack goal and a novel, domain-specific attack vector: dirty road patches. To systematically generate the attack, we adopt an optimization-based approach and overcome domain-specific design challenges such as camera frame inter-dependencies due to attack-influenced vehicle control and the lack of objective function designs for lane detection models. We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces. The results show that our attack is highly effective, with success rates above 97.5% and an average success time below 0.903 seconds, substantially shorter than the average driver reaction time. The attack is also found to be (1) robust to various real-world factors such as lighting conditions and view angles, (2) general to different model designs, and (3) stealthy from the driver’s view. To understand the safety impact, we conduct experiments using software-in-the-loop simulation and attack trace injection in a real vehicle. The results show that our attack can cause a 100% collision rate in different scenarios, including when tested with common safety features such as automatic emergency braking. We also evaluate and discuss defenses.
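
To make the optimization-based attack generation concrete, the sketch below shows one plausible way to handle the camera frame inter-dependencies mentioned above: the closed-loop interaction between the lane-detection model, the lateral controller, and the vehicle motion is unrolled while the road patch is optimized. This is a minimal sketch only, not the paper's actual implementation; `lane_model`, `vehicle_sim`, their methods (`reset`, `render`, `controller`, `step`), and the `lateral_deviation` objective are hypothetical placeholders introduced for illustration.

```python
import torch


def lateral_deviation(lane_pred: torch.Tensor) -> torch.Tensor:
    """Hypothetical attack objective: a scalar proxy for how far the
    predicted lane pushes the vehicle laterally off its intended path."""
    return lane_pred.mean()


def optimize_dirty_patch(lane_model, vehicle_sim, patch_init: torch.Tensor,
                         horizon: int = 20, steps: int = 200,
                         lr: float = 0.01, eps: float = 0.25) -> torch.Tensor:
    """Sketch of optimization-based dirty-road-patch generation.

    lane_model  -- differentiable lane-detection network (assumed interface)
    vehicle_sim -- simple kinematic simulator that renders camera frames for a
                   given vehicle pose with the patch projected onto the road
                   (assumed interface)
    patch_init  -- initial perturbation over the patch region, road-plane coords
    """
    patch = patch_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        pose = vehicle_sim.reset()  # start of the attacked road segment
        loss = torch.zeros(())
        # Camera frames are inter-dependent: each steering decision changes the
        # next camera view, so the whole closed-loop trajectory is unrolled.
        for _ in range(horizon):
            frame = vehicle_sim.render(pose, patch)     # patch projected into view
            lane_pred = lane_model(frame)               # detected lane line(s)
            steer = vehicle_sim.controller(lane_pred)   # downstream lateral control
            pose = vehicle_sim.step(pose, steer)
            loss = loss - lateral_deviation(lane_pred)  # push the vehicle laterally

        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(-eps, eps)  # bound perturbation to stay road-texture-like

    return patch.detach()
```

In the actual attack setting the objective and constraints are domain-specific, e.g., the patch must remain plausible as a dirty road surface and the lane-detection model does not expose a ready-made attack loss; the placeholder `lateral_deviation` above merely stands in for such a domain-specific objective.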
