WIP: End-to-End Analysis of Adversarial Attacks to Automated Lane Centering Systems

Machine learning techniques, particularly those based on deep neural networks (DNNs), are widely adopted in the development of advanced driver-assistance systems (ADAS) and autonomous vehicles. While providing significant improvements over traditional methods in average performance, the use of DNNs also presents great challenges to system safety, especially given the uncertainty of the surrounding environment, the disturbances to system operations, and the current lack of methodologies for predicting DNN behavior. In particular, adversarial attacks on the sensing input may cause errors in the system's perception of the environment and lead to system failure. However, existing works mainly focus on analyzing the impact of such attacks on the sensing and perception results and designing mitigation strategies accordingly. We argue that, as system safety is ultimately determined by the actions the system takes, it is essential to take an end-to-end approach and address adversarial attacks with consideration of the entire ADAS or autonomous driving pipeline, from sensing and perception to planning, navigation, and control. In this paper, we present our initial findings in quantitatively analyzing the impact of a type of adversarial attack that leverages an adversarial road patch on system planning and control, and we discuss possible directions for systematically addressing such attacks with an end-to-end view.
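
To make the end-to-end argument concrete, below is a minimal, purely illustrative Python sketch (not the paper's methodology or code) of how one might quantify an attack's impact at the vehicle level rather than at the perception level. Everything in it is assumed for illustration: the road patch is abstracted as a lateral bias (ATTACK_BIAS) that perception adds to the estimated lane-center offset while the patch is in view (PATCH_STEPS), and the lane-centering controller is a simple proportional-plus-damping steering law (gains KP, KD) driving a toy kinematic vehicle model.

import math

# Hypothetical end-to-end evaluation sketch; all parameters are assumed.
DT = 0.1            # control period [s]
SPEED = 15.0        # longitudinal speed [m/s]
KP = 0.5            # assumed proportional gain on lateral offset
KD = 1.0            # assumed damping gain on heading error
ATTACK_BIAS = 0.5   # assumed lane-center bias induced by the road patch [m]
PATCH_STEPS = range(20, 50)   # steps during which the patch is in view

def perceived_offset(true_offset, step):
    """Perception model: true lateral offset plus any attack-induced bias."""
    bias = ATTACK_BIAS if step in PATCH_STEPS else 0.0
    return true_offset + bias

def simulate(num_steps=150):
    """Close the loop and return the maximum true lateral deviation [m]."""
    offset = 0.0    # true lateral deviation from lane center [m]
    heading = 0.0   # heading error [rad]
    max_dev = 0.0
    for step in range(num_steps):
        estimate = perceived_offset(offset, step)
        heading_rate = -KP * estimate - KD * heading  # steering command
        heading += heading_rate * DT
        offset += SPEED * math.sin(heading) * DT      # kinematic update
        max_dev = max(max_dev, abs(offset))
    return max_dev

if __name__ == "__main__":
    # The per-frame perception error never exceeds ATTACK_BIAS (0.5 m), but
    # the vehicle-level deviation can, which is the quantity that matters.
    print(f"max lateral deviation under attack: {simulate():.2f} m")

Even in this toy loop, the per-frame perception error is bounded by ATTACK_BIAS, yet the closed-loop lateral deviation overshoots beyond it before settling; this gap is precisely why perception-level metrics alone can understate an attack's safety impact.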
