Multimodal-AI-Based Roadway Hazard Identification and Warning Using Onboard Smartphones with Cloud-Based Fusion

Road hazards are a significant cause of fatalities in road accidents. Accurate detection of road hazards can improve safety and enhance the driving experience. Existing methods of road condition monitoring are time-consuming, expensive, and inefficient; they require considerable human effort and must be updated regularly. There is a need for a flexible, cost-effective, and efficient process for detecting road conditions, especially road hazards. In this study, we present a new method for detecting road hazards using smartphones. Since most drivers carry smartphones in their vehicles, we leverage this to detect road hazards in a more flexible, cost-effective, and efficient way. This study proposes a cloud-based deep-learning road hazard detection model built on a Long Short-Term Memory (LSTM) network to detect different types of road hazards from motion data. To address the large data requirements of deep learning, this study proposes fusing simulation data with experimental data for training. The proposed approaches are validated by experimental tests, and the results demonstrate accurate road hazard detection based on cloud-based fusion.
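
To make the abstract's modeling idea concrete, the following is a minimal sketch of an LSTM classifier over windows of smartphone motion data (e.g., accelerometer and gyroscope channels). It is not the paper's implementation; the feature count, window length, hidden size, and hazard classes are hypothetical placeholders, and the cloud-based fusion of simulation and experimental data is not shown.

```python
# Minimal sketch (assumptions, not the paper's code): an LSTM classifier
# over fixed-length windows of smartphone motion data.
import torch
import torch.nn as nn


class HazardLSTM(nn.Module):
    def __init__(self, n_features=6, hidden_size=64, n_classes=4):
        super().__init__()
        # 6 features per time step, e.g., 3-axis accelerometer + 3-axis gyroscope
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        # Classify each window from the hidden state at the final time step
        return self.head(out[:, -1, :])


if __name__ == "__main__":
    model = HazardLSTM()
    # Fake batch: 8 windows of 200 samples (e.g., 2 s at 100 Hz), 6 channels
    windows = torch.randn(8, 200, 6)
    logits = model(windows)      # shape: (8, n_classes)
    print(logits.argmax(dim=1))  # predicted hazard class per window
```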