<rss version="2.0" xmlns:atom="https://www.w3.org/2005/Atom">
  <channel>
    <title>Research in Progress (RIP)</title>
    <link>https://rip.trb.org/</link>
    <atom:link href="https://rip.trb.org/Record/RSS?s=PHNlYXJjaD48cGFyYW1zPjxwYXJhbSBuYW1lPSJkYXRlaW4iIHZhbHVlPSJhbGwiIC8+PHBhcmFtIG5hbWU9InN1YmplY3Rsb2dpYyIgdmFsdWU9Im9yIiAvPjxwYXJhbSBuYW1lPSJ0ZXJtc2xvZ2ljIiB2YWx1ZT0ib3IiIC8+PHBhcmFtIG5hbWU9ImxvY2F0aW9uIiB2YWx1ZT0iMTYiIC8+PC9wYXJhbXM+PGZpbHRlcnM+PGZpbHRlciBmaWVsZD0iaW5kZXh0ZXJtcyIgdmFsdWU9IiZxdW90O1BoeXNpY2FsIHBoZW5vbWVuYSZxdW90OyIgb3JpZ2luYWxfdmFsdWU9IiZxdW90O1BoeXNpY2FsIHBoZW5vbWVuYSZxdW90OyIgLz48L2ZpbHRlcnM+PHJhbmdlcyAvPjxzb3J0cz48c29ydCBmaWVsZD0icHVibGlzaGVkIiBvcmRlcj0iZGVzYyIgLz48L3NvcnRzPjxwZXJzaXN0cz48cGVyc2lzdCBuYW1lPSJyYW5nZXR5cGUiIHZhbHVlPSJwdWJsaXNoZWRkYXRlIiAvPjwvcGVyc2lzdHM+PC9zZWFyY2g+" rel="self" type="application/rss+xml" />
    <description>Research in Progress (RIP) records from the Transportation Research Board.</description>
    <language>en-us</language>
    <copyright>Copyright © 2026. National Academy of Sciences. All rights reserved.</copyright>
    <docs>http://blogs.law.harvard.edu/tech/rss</docs>
    <managingEditor>tris-trb@nas.edu (Bill McLeod)</managingEditor>
    <webMaster>tris-trb@nas.edu (Bill McLeod)</webMaster>
    <image>
      <title>Research in Progress (RIP)</title>
      <url>https://rip.trb.org/Images/PageHeader-wTitle-RIP.jpg</url>
      <link>https://rip.trb.org/</link>
    </image>
    <item>
      <title>Identifying and Patching Vulnerabilities of Camera-LiDAR-Based Autonomous Driving Systems</title>
      <link>https://rip.trb.org/View/2334603</link>
      <description><![CDATA[Autonomous driving systems rely on advanced perception models to interpret their surroundings and make real-time driving decisions. Among these, Bird’s Eye View (BEV) perception has emerged as a critical component, offering a unified 3D representation from multi-camera and sensor inputs. While BEV-based models have gained traction in industry-leading platforms, their security vulnerabilities remain largely underexplored in adversarial machine learning research. This study provides a multi-dimensional security analysis of BEV perception models, focusing on adversarial threats in both vision-only and multi-sensor fusion architectures. We examine the susceptibility of state-of-the-art models, including BEVDet, BEVDet4D, DAL, and BEVFormer, to adversarial attacks targeting their detection and decision-making capabilities. Unlike traditional adversarial research, which primarily misleads perception models at the classification level, this study investigates real-world attack scenarios in which adversaries can manipulate perception to cause practical disruptions, such as inducing traffic congestion or triggering unsafe vehicle behaviors. Our findings reveal significant security risks in BEV-based perception, with both vision-only and sensor-fusion models vulnerable to adversarial perturbations. Attack transferability across architectures further highlights the urgency of developing robust defense mechanisms to ensure the reliability of self-driving technology. This work underscores the need for adversarially resilient perception models to safeguard the future of autonomous driving.]]></description>
      <pubDate>Mon, 05 Feb 2024 16:04:33 GMT</pubDate>
      <guid>https://rip.trb.org/View/2334603</guid>
    </item>
    <item>
      <title>Finding Vulnerabilities of Autonomous Vehicle Stacks to Physical Adversaries</title>
      <link>https://rip.trb.org/View/2334517</link>
      <description><![CDATA[Autonomous Driving (AD) vehicles must interact with and respond in real time to multiple sensor signals indicating the behavior of other agents in the environment, such as other vehicles and pedestrians near the ego vehicle (i.e., the vehicle itself). While autonomous vehicle (AV) developers tend to generate numerous test cases in simulation to detect safety and security problems, to the best of our knowledge they do not test for malicious physical interactions from attackers, such as placing emergency cones on the hood of an AV or executing driving maneuvers that nearby human drivers or other AVs might perform.
The main goal of our project is to develop automatic testing tools that evaluate the safety and security of autonomous vehicle stacks against unanticipated critical physical conditions created by attackers. Specifically, we aim to (1) demonstrate adversarial driving maneuvers in different real-world scenarios, highlighting their potential consequences for AV safety and security; (2) build an attack framework in a simulation environment to study the optimal discovery of adversarial driving maneuvers; and (3) contribute to the development of a skilled AV security workforce. In this way, this effort aims to enable the deployment of increasingly trustworthy transportation systems.]]></description>
      <pubDate>Mon, 05 Feb 2024 15:58:26 GMT</pubDate>
      <guid>https://rip.trb.org/View/2334517</guid>
    </item>
  </channel>
</rss>