Securing Deep Learning against Adversarial Attacks for Connected and Autonomous Vehicles

Description: The overall vision of this project is to develop a defense technique capable of making CAVs more resilient to adversarial attacks and therefore able to satisfy more stringent system safety and performance requirements. To achieve this vision, the project focuses on the following goals: (1) implement a state-of-the-art adversarial attack on a camera and Lidar fusion-based object detection algorithm in the physical world; (2) develop a defense technique for CAVs that utilizes camera and Lidar signals to mitigate physical adversarial attacks on the CAV perception module, thereby enhancing vehicle performance and improving the security and reliability of autonomous systems; (3) validate the proposed approach and the performance of the object detection unit on datasets with camera and Lidar signals as inputs; and (4) evaluate and demonstrate the proposed technology on an autonomous F1/10 car testbed whose perception architecture mimics a CAV perception module.

Intellectual Merit: As CAV technology is developing rapidly and will soon enter the market, the proposed research addresses the problem of improving the resilience of CAVs to adversarial attacks aimed at degrading the performance of the CAV perception module, thereby improving vehicle reliability and functional safety beyond currently adopted practices. The research team envisions that this technique will play an important role in securing automated vehicles and thus accelerating the adoption of CAVs. Expected outcomes of the project fall well within the C2M2 research priority focus on artificial intelligence in multi-modal transportation cyber-physical systems.

Broader Impacts: The overarching goal of this project is to develop a technique to improve the trustworthiness of the perception information used by CAVs. More specifically, this project proposes a defense mechanism against adversarial attacks performed on the 3D object detector in the CAV perception module. The project will focus on an autonomous vehicle architecture with a perception-planning-action pipeline, in which the outputs of the perception module are used by the planning module to execute the motion planning task. In this architecture, the CAV faces the challenge of obtaining correct sensing information about the surrounding environment, including recognizing pedestrians and traffic signs. The proposed technique will also account for real-time computational constraints and the tradeoff between accuracy and robustness. This objective will be achieved by developing a defense mechanism that allows CAVs to obtain trustworthy inputs from camera and Lidar by making their perception modules robust to adversarial inputs. The specific objectives of the proposed research are to develop: (a) an approach to implement adversarial attacks on the CAV sensor fusion system, and (b) intrinsically robust neural networks that make adversarial attacks less effective, i.e., that produce correct recognition results even under adversarial attacks. The real-time evaluation of this strategy will be conducted using an autonomous F1/10 car testbed, and its performance will be compared against a baseline (non-robustified) model.
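
To make objectives (a) and (b) concrete, the following is a minimal sketch of one common attack/robustification pairing: a PGD-style adversarial perturbation and a single adversarial-training step in PyTorch. It is illustrative only; a simple image classifier stands in for the camera/Lidar fusion 3D detector, and PGD with adversarial training is assumed as a representative technique rather than the project's specific method. The `pgd_attack` and `adversarial_training_step` names and all hyperparameters are hypothetical.

```python
# Illustrative sketch: PGD-style attack and one adversarial-training step.
# Assumes a differentiable classifier `model` with inputs in [0, 1];
# this is NOT the project's fusion-based 3D detector or its actual defense.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take an ascent step on the loss, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One optimization step on adversarial examples (robustified model)."""
    model.eval()                 # generate the attack with fixed normalization stats
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this sketch, the planned evaluation would amount to comparing the recognition accuracy of the robustified model and the non-robustified baseline on attacked inputs, mirroring the comparison planned on the F1/10 testbed.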