3D Object Detection with SLS-Fusion Network in Foggy Weather Conditions

authors

  • Mai, Nguyen-Anh-Minh
  • Duthon, Pierre
  • Khoudour, Louahdi
  • Crouzil, Alain
  • Velastin Carroza, Sergio Alejandro

publication date

  • October 2021

start page

  • 6711

issue

  • 20

volume

  • 21

International Standard Serial Number (ISSN)

  • 1424-3210

Electronic International Standard Serial Number (EISSN)

  • 1424-8220

abstract

  • The role of sensors such as cameras or LiDAR (Light Detection and Ranging) is crucial for the environmental awareness of self-driving cars. However, the data collected from these sensors are subject to distortions in extreme weather conditions such as fog, rain, and snow, which can lead to serious safety problems when operating a self-driving vehicle. The purpose of this study is to analyze the effects of fog on the detection of objects in driving scenes and then to propose methods for improvement. Collecting and processing data in adverse weather conditions is often more difficult than in good weather conditions, so a synthetic dataset that simulates bad weather is a simpler and more economical way to validate a method before working with a real dataset. In this paper, we apply fog synthesis to the public KITTI dataset to generate the Multifog KITTI dataset for both images and point clouds. In terms of processing tasks, we test our previous LiDAR- and camera-based 3D object detector, the Sparse LiDAR and Stereo Fusion network (SLS-Fusion), to see how it is affected by foggy weather conditions. We propose training on both the original and the augmented datasets to improve performance in foggy weather while keeping good performance under normal conditions. Experiments on the KITTI and the proposed Multifog KITTI datasets show that, before any improvement, 3D object detection performance on Moderate objects drops by 42.67% in foggy weather. With the proposed training strategy, results improve significantly by 26.72% while performance on the original dataset remains good, with a drop of only 8.23%. In summary, fog often causes 3D detection on driving scenes to fail; by additional training with the augmented dataset, we significantly improve the performance of the proposed 3D object detection algorithm for self-driving cars in foggy weather conditions.
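
  • Illustrative note (not part of the original record): a minimal sketch of how fog synthesis of the kind described in the abstract could be applied to KITTI-style data, assuming the standard Koschmieder fog model for images and a simple visibility cut-off for LiDAR returns. The function names, parameters, and thresholds below are assumptions for illustration, not the authors' actual Multifog KITTI pipeline.

    ```python
    import numpy as np

    def add_fog_to_image(image, depth, beta=0.05, airlight=0.8):
        """Apply a Koschmieder-style fog veil to an RGB image.

        image    : HxWx3 float array in [0, 1]
        depth    : HxW per-pixel depth in metres
        beta     : extinction coefficient (larger = denser fog)
        airlight : grey level of the atmospheric light
        """
        # Transmission t(x) = exp(-beta * d(x)); fog blends the scene toward the airlight.
        transmission = np.exp(-beta * depth)[..., None]
        return image * transmission + airlight * (1.0 - transmission)

    def attenuate_point_cloud(points, beta=0.05, max_range_clear=80.0):
        """Discard LiDAR returns beyond the fog-limited visibility.

        points : Nx4 array (x, y, z, intensity), KITTI-style
        """
        ranges = np.linalg.norm(points[:, :3], axis=1)
        # Koschmieder visibility relation: V ~= 2.996 / beta (5% contrast threshold).
        visible = ranges < min(max_range_clear, 2.996 / beta)
        return points[visible]
    ```

    A mixed-training strategy such as the one the abstract mentions could then simply concatenate the clear-weather samples with their fog-augmented counterparts before training the detector.
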

subjects

  • Computer Science

keywords

  • adverse weather conditions; foggy perception; synthetic datasets; 3d object detection; autonomous vehicles