PRBonn / LiDAR-MOS

(LMNet) Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data (RAL/IROS 2021)
MIT License

Tweaking the model for partial azimuth FOV Lidar #45

Open boazMgm opened 2 years ago

boazMgm commented 2 years ago

Hi, my LiDAR's azimuth FOV is only ~100 [deg]. What would be the best way to tweak the model or some configuration so that it works? Currently the range images (and also the residual images) are very sparse on the left and right sides, and I think that is one of the reasons for the bad performance I get. Thanks

Chen-Xieyuanli commented 2 years ago

Hey @boazMgm, you are right. Range-image-based methods may not work well with LiDAR sensors that have low resolution in either azimuth or inclination. To get good performance, you may try a 3D CNN operating directly on the point clouds.

We are currently working on a 3D-CNN-based LiDAR-MOS method. We will submit it to IROS today and will also release the code soon.

boazMgm commented 2 years ago

Thanks :) My LiDAR has only 32 channels instead of the 64 in the KITTI dataset, and it also has a limited azimuth FOV of 100 [deg]. I thought of the following tweaks:

  1. generating the residual images with 32 pixels in height (instead of 64).
  2. changing the spherical projection to: proj_x = 0.5 * (yaw / (c * np.pi) + 1.0), where c = 100/360

I have tried both (a sketch of the combined projection is below), but I still don't get the results I expected. Is there anything you think may help?
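For concreteness, here is a minimal sketch of both tweaks combined, loosely following the lidar-bonnetal-style projection this repo uses. The function name, the image width of 900, and the VLP-32C-style fov_up/fov_down defaults are my own assumptions, and it uses c = FOV/360 exactly as tweak 2 is written above:

```python
import numpy as np

def project_partial_fov(points, fov_azimuth_deg=100.0,
                        fov_up_deg=15.0, fov_down_deg=-25.0,
                        proj_W=900, proj_H=32):
    """Spherical projection for a LiDAR with a limited azimuth FOV.
    points: N x 3 array in the sensor frame. Returns a proj_H x proj_W
    range image with -1.0 in empty pixels."""
    depth = np.maximum(np.linalg.norm(points, axis=1), 1e-8)
    yaw = -np.arctan2(points[:, 1], points[:, 0])   # azimuth in [-pi, pi]
    pitch = np.arcsin(points[:, 2] / depth)         # inclination

    c = fov_azimuth_deg / 360.0          # half-FOV as a fraction of pi
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)

    # tweak 2: yaw in [-c*pi, +c*pi] maps to proj_x in [0, 1]
    proj_x = 0.5 * (yaw / (c * np.pi) + 1.0)
    # tweak 1: 32 rows; pitch in [fov_down, fov_up] maps to proj_y in [1, 0]
    proj_y = 1.0 - (pitch - fov_down) / (fov_up - fov_down)

    u = np.clip((proj_x * proj_W).astype(np.int32), 0, proj_W - 1)
    v = np.clip((proj_y * proj_H).astype(np.int32), 0, proj_H - 1)

    # write farthest points first so the closest point per pixel wins
    order = np.argsort(depth)[::-1]
    image = np.full((proj_H, proj_W), -1.0, dtype=np.float32)
    image[v[order], u[order]] = depth[order]
    return image
```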

Chen-Xieyuanli commented 2 years ago

> Thanks :) My LiDAR has only 32 channels instead of the 64 in the KITTI dataset, and it also has a limited azimuth FOV of 100 [deg]. I thought of the following tweaks:
>
>   1. generating the residual images with 32 pixels in height (instead of 64).
>   2. changing the spherical projection to: proj_x = 0.5 * (yaw / (c * np.pi) + 1.0), where c = 100/360
>
> I have tried both, but I still don't get the results I expected. Is there anything you think may help?

One thing you should check is the FOV parameters in inclination. For a 64-beam Velodyne they are fov_up=3.0 and fov_down=-25.0, and they should be different for a 32-beam LiDAR.
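If you are unsure of the right values, a quick check like the following (a hypothetical helper, not part of this repo) estimates them directly from one raw scan:

```python
import numpy as np

def estimate_inclination_fov(points):
    """Estimate (fov_up, fov_down) in degrees from one raw N x 3 scan."""
    depth = np.maximum(np.linalg.norm(points, axis=1), 1e-8)
    pitch = np.degrees(np.arcsin(points[:, 2] / depth))
    return pitch.max(), pitch.min()

# A VLP-32C should come out near (+15, -25),
# unlike the KITTI HDL-64E's (+3, -25).
```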

Changing the projection function could be an interesting idea, and we haven't tested it before.

Let's keep this issue open and see whether any other interesting ideas pop up from other users.

boazMgm commented 2 years ago

Thanks. Just a fix: c = 100/180

Psyclonus2887 commented 2 years ago

Another question: have you tested the results on a small-FOV LiDAR with a non-repetitive scanning pattern, like the Livox series? They can also generate dense point clouds, so the FOV problem may not be a big deal?

boazMgm commented 2 years ago

No, I haven't. I'm using a few recordings from a VLP-32C. This LiDAR has a 360 [deg] azimuth FOV, but in the recordings I have it was limited (by software) to ~100 [deg].

Psyclonus2887 commented 2 years ago

Hello, it's me again. After a valuable exchange with the author, I tried accumulating the point cloud for 1 s, which leads to 100% coverage of the FOV thanks to the non-repetitive scanning mode. The range image obtained by spherical projection is now very dense. However, the predictions using the pretrained network are still bad: again, all the points are classified as moving objects.

[screenshot: prediction result with all points labeled as moving]
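For reference, the accumulation step looks roughly like this (a minimal sketch of my own; it assumes per-frame sensor poses are available, e.g. from odometry, or identity matrices if the sensor is static):

```python
import numpy as np

def accumulate_scans(scans, poses):
    """Merge consecutive Livox frames into the last frame's coordinates.
    scans: list of N_i x 3 point arrays (sensor frame).
    poses: list of 4 x 4 world-from-sensor transforms, one per frame."""
    T_ref_inv = np.linalg.inv(poses[-1])
    merged = []
    for pts, T in zip(scans, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # N x 4 homogeneous
        merged.append((homo @ (T_ref_inv @ T).T)[:, :3])
    return np.vstack(merged)  # dense cloud covering the full FOV after ~1 s
```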

As the range image is really dense now, is the issue only in the azimuth? My FOV is 80x25 [deg], and the projection parameters are set as below:

[screenshot: projection parameter settings]