Multimodal-AI based Roadway Hazard Identification and Warning using Onboard Smartphones with Cloud-based Fusion

Description: In a previous Center for Connected Multimodal Mobility (C2M2) project, the research team addressed the problem of detecting pavement conditions using in-vehicle smartphones. In this proposal, guided by SC-DOT's stated needs, the team will build on those prior results to pursue the following new objectives:

Objective 1: Collect and annotate smartphone-based roadway hazard data.

Objective 2: Investigate state-of-the-art machine learning approaches to detect multiple types of roadway hazards and estimate their threat levels.

Objective 3: Investigate a cloud-based approach that fuses roadway hazard detections from different vehicles to provide holistic, accurate, and complete monitoring of roadway hazards and their threat levels (a minimal illustrative sketch of this fusion step follows this summary).

Intellectual Merit: The intellectual merit of the project is summarized as follows: (1) Develop a highly cost-effective approach to identifying roadway hazards using the in-vehicle smartphones of members of the traveling public; (2) Develop a novel cloud-based collaborative roadway hazard monitoring approach that combines multi-modal, multi-output deep learning for hazard detection and threat-level estimation to deliver holistic, accurate, and complete monitoring of road conditions; and (3) Create a smartphone-based roadway hazard dataset for training road hazard detection models.

Broader Impacts: The major expected impacts are summarized as follows: (1) Provide a highly cost-effective way of identifying roadway hazards with minimal investment in equipment and labor. (2) Significantly improve the safety of transportation systems, especially multimodal connected and automated transportation systems, by providing timely roadway hazard information. (3) Create a smartphone-based roadway hazard dataset to benefit the research community.
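The sketch below illustrates one way the cloud-side fusion in Objective 3 could work; it is not the project's actual design. All names (HazardReport, fuse_reports), the 25 m grouping radius, the hazard-type labels, and the sample coordinates are hypothetical assumptions: per-vehicle detections carrying a GPS fix, a hazard type, a model confidence, and an estimated threat level are grouped by proximity and combined by confidence-weighted voting into consolidated hazard records.

```python
# Minimal sketch of cloud-side fusion of per-vehicle hazard reports.
# All identifiers, thresholds, and labels are illustrative placeholders.
import math
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class HazardReport:
    vehicle_id: str
    lat: float          # degrees
    lon: float          # degrees
    hazard_type: str    # e.g., "pothole", "debris" (hypothetical labels)
    confidence: float   # detector confidence in [0, 1]
    threat_level: int   # e.g., 1 (low) .. 3 (high)


def _distance_m(a: HazardReport, b: HazardReport) -> float:
    """Approximate ground distance in meters (equirectangular; fine at city scale)."""
    k = 111_320.0  # meters per degree of latitude
    dx = (a.lon - b.lon) * k * math.cos(math.radians((a.lat + b.lat) / 2))
    dy = (a.lat - b.lat) * k
    return math.hypot(dx, dy)


def fuse_reports(reports: list[HazardReport], radius_m: float = 25.0) -> list[dict]:
    """Greedily group reports within radius_m of a cluster seed, then fuse each group."""
    clusters: list[list[HazardReport]] = []
    for r in reports:
        for cluster in clusters:
            if _distance_m(r, cluster[0]) <= radius_m:
                cluster.append(r)
                break
        else:
            clusters.append([r])

    fused = []
    for cluster in clusters:
        # Confidence-weighted vote over the hazard type reported by different vehicles.
        votes: dict[str, float] = defaultdict(float)
        for r in cluster:
            votes[r.hazard_type] += r.confidence
        hazard_type = max(votes, key=votes.get)
        members = [r for r in cluster if r.hazard_type == hazard_type]
        total = sum(r.confidence for r in members)
        fused.append({
            "hazard_type": hazard_type,
            # Confidence-weighted centroid of the agreeing reports.
            "lat": sum(r.confidence * r.lat for r in members) / total,
            "lon": sum(r.confidence * r.lon for r in members) / total,
            # Confidence-weighted mean threat level, rounded to the nearest level.
            "threat_level": round(sum(r.confidence * r.threat_level for r in members) / total),
            "num_vehicles": len({r.vehicle_id for r in cluster}),
        })
    return fused


if __name__ == "__main__":
    # Illustrative sample reports: three vehicles near the same spot, one elsewhere.
    reports = [
        HazardReport("veh-1", 34.00050, -81.03480, "pothole", 0.90, 3),
        HazardReport("veh-2", 34.00052, -81.03477, "pothole", 0.75, 2),
        HazardReport("veh-3", 34.00051, -81.03479, "debris", 0.40, 1),
        HazardReport("veh-4", 34.01200, -81.02000, "flooding", 0.85, 3),
    ]
    for hazard in fuse_reports(reports):
        print(hazard)
```

In this sketch, corroboration across vehicles raises confidence in a hazard's type and location while down-weighting isolated low-confidence detections; the actual project would likely replace the greedy grouping and weighted voting with a more principled spatial clustering and probabilistic fusion scheme.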