Cloud-based Collaborative Road Condition Monitoring using In-Vehicle Smartphone Data and Deep Learning

Ensuring the safety of transportation systems requires monitoring road conditions. Traditionally, surveyors inspect roads by walking or driving along them and searching for defects manually. Such processes demand substantial human and equipment effort, yet they can still hardly provide real-time information on road conditions. Existing automated road condition monitoring approaches usually require special vehicles equipped with dedicated sensors and on-board processing and computing devices. Moreover, these approaches rely on a single vehicle performing detection on its own, and that vehicle usually must still be driven by a surveyor. In this project, we therefore developed a far more cost-effective approach: cloud-based collaborative road condition monitoring using in-vehicle smartphones, which can come from any member of the driving public.

When a vehicle drives over a road defect, the acceleration signal, especially the vertical acceleration, exhibits a distinctive pattern. The type of defect can be identified from the general shape of the acceleration waveform, while the amplitude of the waveform reflects the vehicle speed and the severity of the defect. We trained a Long Short-Term Memory (LSTM) based deep learning network to identify defect types from the acceleration data. Because the smartphone is placed in the passenger cabin, the motion it measures is filtered by the vehicle suspension, and the LSTM can therefore have difficulty deciding the defect type from accelerations alone. We thus also trained a YOLO (You Only Look Once) deep learning network to detect and identify defects in the live video captured by the smartphone's camera.

To obtain holistic monitoring of the road condition, we fused the detection results produced by these deep learning approaches on the smartphones of multiple vehicles. The motion- and vision-based road condition detection results, together with the GPS locations of the vehicles, are sent to a cloud server through cellular networks. On the server, all detection results are fused with the k-means clustering method based on their GPS locations, and the three most frequently occurring damage types within each cluster are taken to represent the road condition at that location.

We developed a data collection app to collect acceleration and vision data from smartphones mounted on the windshields of multiple cars. The data for this experiment were collected on various roads in the Greenville, Spartanburg, Clemson, and Columbia areas of South Carolina, USA. The trained LSTM model achieved an accuracy of 94%, and the trained YOLO model an accuracy of 87.5%, in classifying potholes, cracks, and normal road surfaces. We also created a web page that displays the fused road damage detections on a map, enabling the concerned authorities to view the road damage reported by users of our mobile application.
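As a concrete illustration of the acceleration-based classifier described above, the following is a minimal sketch of an LSTM network that maps fixed-length windows of three-axis accelerometer data to the three defect classes. The window length, layer sizes, and training settings are illustrative assumptions, not the project's actual hyperparameters.

```python
# Minimal sketch of an LSTM classifier for accelerometer windows.
# Assumes fixed-length windows of (ax, ay, az) samples labeled as
# pothole, crack, or normal; all sizes here are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

WINDOW_LEN = 128   # samples per window (assumed)
N_CHANNELS = 3     # x, y, z acceleration
N_CLASSES = 3      # pothole, crack, normal

model = keras.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X_train: (n_windows, WINDOW_LEN, N_CHANNELS); y_train: integer labels
# model.fit(X_train, y_train, epochs=30, validation_split=0.2)
```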
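The vision branch runs YOLO on frames of the smartphone's live video. The report does not name a specific YOLO version, so the sketch below uses the ultralytics package as an illustrative stand-in; the weights file "road_defects.pt" is a hypothetical custom model trained on pothole and crack images.

```python
# Illustrative frame-by-frame defect detection with a YOLO model.
# The ultralytics package and the "road_defects.pt" weights are
# assumptions; the project's exact YOLO setup may differ.
import cv2
from ultralytics import YOLO

model = YOLO("road_defects.pt")  # hypothetical custom weights

cap = cv2.VideoCapture(0)        # smartphone/webcam video stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        cls_name = model.names[int(box.cls)]  # e.g. "pothole", "crack"
        conf = float(box.conf)
        print(cls_name, round(conf, 2))
cap.release()
```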
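Each detection, whether from the LSTM or the YOLO pipeline, is reported to the cloud server together with the vehicle's GPS fix. A sketch of such a report follows; the endpoint URL and the JSON schema are hypothetical, not the project's actual API.

```python
# Sketch of reporting one detection to the collaborative cloud server.
# The URL and field names are hypothetical placeholders.
import requests

def report_detection(defect_type, confidence, lat, lon, source):
    """Send one detection with its GPS location over the cellular network."""
    payload = {
        "defect_type": defect_type,   # "pothole", "crack", or "normal"
        "confidence": confidence,
        "lat": lat,
        "lon": lon,
        "source": source,             # "accel_lstm" or "video_yolo"
    }
    resp = requests.post("https://example.com/api/detections",
                         json=payload, timeout=10)
    resp.raise_for_status()

# report_detection("pothole", 0.91, 34.8526, -82.3940, "video_yolo")
```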
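Finally, the server-side fusion step clusters the accumulated detections by GPS location with k-means and summarizes each cluster by its three most frequent damage types, as described above. The sketch below assumes detections arrive as (latitude, longitude) pairs with a defect label each; the number of clusters is an assumption.

```python
# Sketch of the server-side fusion step: cluster detections by GPS
# location with k-means, then keep the three most frequent damage
# types per cluster for display on the map.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def fuse_detections(coords, labels, n_clusters=20):
    """coords: (n, 2) array of (lat, lon); labels: defect type per detection.
    Returns {cluster center: top-3 damage types}."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(coords)
    summary = {}
    for c in range(n_clusters):
        types = [labels[i] for i in np.where(km.labels_ == c)[0]]
        top3 = [t for t, _ in Counter(types).most_common(3)]
        summary[tuple(km.cluster_centers_[c])] = top3
    return summary
```

Summarizing each cluster rather than plotting every raw detection keeps the map readable while still reflecting agreement across multiple vehicles passing the same location.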