LiDAR Snowfall Simulation for Robust 3D Object Detection
Martin Hahner, Christos Sakaridis, Mario Bijelic, Felix Heide, Fisher Yu, Dengxin Dai, Luc Van Gool
3D object detection is a central task for applications such as autonomous driving, in which the system needs to localize and classify surrounding traffic agents, even in the presence of adverse weather. In this paper, we address the problem of LiDAR-based 3D object detection under snowfall. Due to the difficulty of collecting and annotating training data in this setting, we propose a physically based method to simulate the effect of snowfall on real clear-weather LiDAR point clouds. Our method samples snow particles in 2D space for each LiDAR line and uses the induced geometry to modify the measurement for each LiDAR beam accordingly. Moreover, as snowfall often causes wetness on the ground, we also simulate ground wetness on LiDAR point clouds. We use our simulation to generate partially synthetic snowy LiDAR data and leverage these data for training 3D object detection models that are robust to snowfall. We conduct an extensive evaluation using several state-of-the-art 3D object detection methods and show that our simulation consistently yields significant performance gains on the real snowy STF dataset compared to clear-weather baselines and competing simulation approaches, while not sacrificing performance in clear weather. Our code is available at github.com/SysCV/LiDAR_snow_sim.
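The abstract describes sampling snow particles in 2D space for each LiDAR line and modifying each beam's measurement based on the induced geometry. The toy sketch below illustrates that general idea only; all parameters (`particles_per_m`, `hit_prob`, `attenuation`) are hypothetical placeholders and do not reproduce the paper's physically based model.

```python
import numpy as np

def simulate_snowfall_2d(ranges, intensities, particles_per_m=0.02,
                         hit_prob=0.3, attenuation=0.5, seed=0):
    """Toy per-beam snowfall sketch (illustrative, not the authors' method):
    sample snow particles along each beam; a strong hit creates an early
    scatter return, a weak hit only attenuates the original echo."""
    rng = np.random.default_rng(seed)
    out_r = ranges.astype(float).copy()
    out_i = intensities.astype(float).copy()
    for k, r in enumerate(ranges):
        # expected particle count scales with the beam's travelled length
        n = rng.poisson(particles_per_m * r)
        if n == 0:
            continue  # beam unaffected by snowfall
        # nearest particle along the beam dominates the scatter return
        d = rng.uniform(0.0, r, size=n).min()
        if rng.random() < hit_prob:
            out_r[k] = d                          # snowflake becomes the return
        out_i[k] = attenuation * intensities[k]   # echo is attenuated either way
    return out_r, out_i

ranges = np.full(100, 30.0)
intensities = np.ones(100)
new_r, new_i = simulate_snowfall_2d(ranges, intensities)
```

Note that, as in the paper's simulation, only beams that actually encounter a particle are modified; unaffected beams keep their clear-weather measurement.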
Figure 1. 3D object detection results in heavy snowfall after training with the proposed data augmentation scheme (top right), compared to training without augmentation (top left). The bottom row shows the RGB image as reference.

Figure 2. Simulated snowfall corresponding to a snowfall rate of rs = 2.5 mm/h. The left block shows the undisturbed clear-weather input. The right block shows our snowfall simulation (top) and the snowfall simulation in LISA (bottom). Note that we simulate the scattering realistically and only attenuate points that are affected by individual snowflakes, instead of attenuating all points based on their distance.
Figure 3. Snow particles interfering with a single LiDAR beam (top). Schematic plot of the corresponding received power echoes (bottom). Note how the received power of individual targets can overlap with each other (cτH ≈ 3 m with τH = 10 ns).
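The overlap distance quoted in the caption follows directly from the pulse duration: a pulse of length τH occupies c·τH in space, so two echoes closer than that blur together. A quick check of the arithmetic (using the approximate value c = 3 × 10⁸ m/s):

```python
c = 3.0e8        # speed of light in m/s (approximate)
tau_H = 10e-9    # pulse duration tau_H = 10 ns
pulse_extent = c * tau_H
print(pulse_extent)  # -> 3.0 (meters), matching c*tau_H ≈ 3 m in the caption
```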
Figure 4. A real-world capture on a dry highway (top), a real-world capture with a water height of dw = 0.53 mm (middle), and the synthesized road wetness from the clear reference (bottom).
Figure 5. Visualization of the geometric optics model that describes reflection on a wet road surface.
Table 1. Comparison of simulation methods for 3D object detection in snowfall on STF. We report 3D average precision (AP) of moderate cars on three STF splits: the heavy snowfall test split with 1404 samples, the light snowfall test split with 2512 samples, and the clear-weather test split with 1816 samples. “Ours-wet”: our wet ground simulation, “Ours-snow”: our snowfall simulation, “Ours-snow+wet”: cascaded application of our snowfall and wet ground simulation.
Figure 6. Qualitative comparison of PV-RCNN on samples from STF containing heavy snowfall. The leftmost column shows the corresponding RGB images. The remaining columns show the LiDAR point clouds with ground-truth boxes and predictions using the clear-weather baseline (“no augmentation”), DROR, LISA, and our fully-fledged simulation (“our snow+wet augmentation”).