RobustFieldAutonomyLab / LeGO-LOAM

LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain
BSD 3-Clause "New" or "Revised" License

Accuracy of using non-uniform lidar data distribution #252

Open robertsenputras opened 2 years ago

robertsenputras commented 2 years ago

Hi everyone,

Has anyone here tried passing non-uniform lidar data, such as RS-Lidar 32 (Robosense) data, which has this kind of vertical distribution?

RS-Lidar 32 distribution

What do you think about building a range image from this kind of data? Should I vary the vertical angular resolution for each vertical scan, or can we force it to have the same angular resolution? I have a concern about the accuracy of the SLAM. I saw that the two methods produce different range image representations, shown in the image below. Top: same angular resolution; bottom: different angular resolutions.

Do you have any idea how to pass this kind of lidar data? And how much would the accuracy differ between the two methods?

Thank you.

L-Reichardt commented 1 year ago

You can implement your own parameters matching your LiDAR in the following file.
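For reference, these are the sensor parameters LeGO-LOAM reads from `utility.h`. The 32-beam values below are placeholders, not real RS-Lidar 32 numbers; take the actual field of view and resolution from the Robosense datasheet. Note that for a non-uniform sensor there is no single correct `ang_res_y`, which is exactly the problem discussed in this thread.

```cpp
// utility.h (LeGO-LOAM) -- example values for a 32-beam sensor.
// All numbers below are placeholders for illustration only.
extern const int N_SCAN = 32;          // number of vertical rings
extern const int Horizon_SCAN = 1800;  // points per ring (360 / ang_res_x)
extern const float ang_res_x = 0.2;    // horizontal resolution (deg)
extern const float ang_res_y = 1.33;   // only meaningful for a uniform sensor
extern const float ang_bottom = 25.0;  // |angle of the lowest beam| (deg)
extern const int groundScanInd = 20;   // rings used for ground extraction
```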

robertsenputras commented 1 year ago

Thanks for your answer.

I want to ask a follow-up question about this. Since imageProjection.cpp produces a 2D range image that the feature extraction uses to extract edge and surface points, will the projection method affect the performance of the map creation? I notice that if I register the uneven lidar distribution by laser id, I get many more surface and edge points, whereas if I register it using the linear equation (like the one inside imageProjection), not all of the lidar data is assigned to the range image, so there are fewer surface and edge points. Here's the picture.

note: I'm still using a 2D matrix with dimensions num_channel × num_columns.

What should I use then? Will more surface points improve the surface-feature matching performance even though the lidar data distribution is not linear? I'm afraid the matching could be much worse.

L-Reichardt commented 1 year ago

@robertsenputras The sensor I am working with has a uniform vertical angle, so I do not have experience with your problem. I am also not familiar with the exact methods for feature extraction inside LegoLoam. My gut feeling is that having empty pixels inside the range image (bottom range image) could be detrimental to the feature extraction. Skimming over the paper, it looks like LegoLoam was designed for sensors with a uniform distribution. My best guess is that a range image with the same angular resolution (top image) would work best; however, it's probably best tested through trial and error.