RobustFieldAutonomyLab / LeGO-LOAM

LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain
BSD 3-Clause "New" or "Revised" License
2.35k stars · 1.11k forks

bad kitti dataset test result #151

Closed xuwuzhou closed 4 years ago

xuwuzhou commented 4 years ago

[image] The result on the KITTI 00 sequence with IMU is really bad; it looks as if no loop closure occurred. Has anybody met a similar problem?

TixiaoShan commented 4 years ago

Right now the implementation of the point cloud projection only works for the VLP-16 and HDL-32E. Projecting the point cloud nonlinearly and correctly onto the range image can help improve accuracy.

SakuraMemoryKnight commented 4 years ago

If you are using your lidar with an IMU, make sure the IMU is aligned properly with the lidar.

xuwuzhou commented 4 years ago

Thanks for your reply @TixiaoShan! Since the lidar in the KITTI dataset is an HDL-64, the point cloud projection implementation in LeGO-LOAM should be changed, shouldn't it?

xuwuzhou commented 4 years ago

Thanks for your reply @zcg3648806! I tested the KITTI 00 sequence with IMU, and I used the calibs.txt with kitti2bag to generate a rosbag. How can I check whether the IMU is aligned with the lidar?

xuwuzhou commented 4 years ago

@TixiaoShan Hi, do you mean this issue? In fact I have changed the code as that issue says, and the result is still not good, as mentioned. https://github.com/RobustFieldAutonomyLab/LeGO-LOAM/issues/12#issuecomment-414406582

TixiaoShan commented 4 years ago

My comment on #12 is wrong. I thought the lidar beams of the HDL-64 were distributed linearly; in fact, they are not. So a customized range-image projection needs to be implemented.
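To illustrate the point about nonlinear beam spacing: instead of computing the range-image row from a constant angular step (as the VLP-16/HDL-32E path does), one could look up the nearest calibrated elevation angle per point. A minimal sketch in Python; the angle table below is illustrative only, not the real HDL-64E calibration:

```python
import math

# Illustrative elevation angles (degrees), sorted ascending. The real
# HDL-64E table comes from its calibration file and is NOT linear.
RING_ANGLES = [-24.8, -20.0, -15.5, -11.3, -8.0, -5.2, -2.7, -0.5,
               1.2, 2.0]  # truncated example: 10 of 64 rings

def row_index(x, y, z):
    """Map a point to the range-image row of the nearest beam angle,
    instead of assuming a constant angular step between rings."""
    vertical_angle = math.degrees(math.atan2(z, math.hypot(x, y)))
    # nearest-neighbour search over the calibrated angle table
    return min(range(len(RING_ANGLES)),
               key=lambda i: abs(RING_ANGLES[i] - vertical_angle))
```

The same nearest-angle (or binary-search) lookup could replace the linear `rowIdn` formula inside `projectPointCloud()`, with the table filled from the HDL-64E calibration file.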

xuwuzhou commented 4 years ago

@TixiaoShan Thanks for your reply! So how should I modify the code? Should I write a new routine to implement the range image projection?

TixiaoShan commented 4 years ago

@xuwuzhou You can change the code in projectPointCloud() to implement it.

SakuraMemoryKnight commented 4 years ago

@xuwuzhou In fact, I have the same problem: I don't know how to align the IMU to the lidar. If you don't mind, could you tell me how to do it? Thank you!

SakuraMemoryKnight commented 4 years ago

@xuwuzhou Have you finished the new projection code for the HDL-64E yet? In fact, I don't quite understand the beam distribution of this lidar (https://github.com/ros-drivers/velodyne/blob/master/velodyne_pointcloud/tests/angles-calibrated.yaml).

mengqingyu123 commented 4 years ago

Hello, I have just started working with LeGO-LOAM. How did you save the trajectory it generates? Thank you very much!

xuwuzhou commented 4 years ago

@zcg3648806 Sorry, I'm also troubled by this problem. Would you tell me if you make any progress? Thank you!

xuwuzhou commented 4 years ago

@mengqingyu123 Record the topic /aft_mapped_to_init.
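For readers wanting the trajectory in a form evo can read: a hedged sketch of formatting each pose from that topic as a TUM-trajectory line (timestamp tx ty tz qx qy qz qw). The field names assume a nav_msgs/Odometry publisher; the ROS subscription itself is omitted so the helper stays self-contained:

```python
def odom_to_tum_line(stamp_sec, position, orientation):
    """Format one pose as a TUM-trajectory line:
    'timestamp tx ty tz qx qy qz qw' (space-separated)."""
    px, py, pz = position
    qx, qy, qz, qw = orientation
    return "%.6f %.6f %.6f %.6f %.6f %.6f %.6f %.6f" % (
        stamp_sec, px, py, pz, qx, qy, qz, qw)

# In a rospy callback one would write something like:
#   f.write(odom_to_tum_line(msg.header.stamp.to_sec(),
#           (p.x, p.y, p.z), (q.x, q.y, q.z, q.w)) + "\n")
```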

mengqingyu123 commented 4 years ago

@xuwuzhou Thank you. I have recorded the messages on /aft_mapped_to_init with rostopic echo, but their format cannot be compared with KITTI's ground truth. Have you used any tool to convert the data format? Thanks for your help!

SakuraMemoryKnight commented 4 years ago

@xuwuzhou I think one reason the IMU of the KITTI dataset didn't perform well is that there is a distance between the IMU and the lidar; that's my guess. But I don't know how to use the calibration file of the KITTI dataset (calib_velo_to_cam.txt).
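As background on those KITTI calibration files: the calib_*_to_*.txt files store the extrinsic as an "R:" line (3x3 rotation, row-major) and a "T:" line (3x1 translation). A hedged sketch of assembling them into a 4x4 homogeneous transform, assuming that line layout:

```python
def parse_kitti_calib(text):
    """Build a 4x4 homogeneous transform from the 'R:' (3x3, row-major)
    and 'T:' (3x1) lines found in KITTI calib_*_to_*.txt files."""
    vals = {}
    for line in text.strip().splitlines():
        key, _, rest = line.partition(":")
        if key.strip() in ("R", "T"):
            vals[key.strip()] = [float(v) for v in rest.split()]
    R, T = vals["R"], vals["T"]
    # stack [R | T] over [0 0 0 1]
    return [R[0:3] + [T[0]],
            R[3:6] + [T[1]],
            R[6:9] + [T[2]],
            [0.0, 0.0, 0.0, 1.0]]

sample = "R: 1 0 0 0 1 0 0 0 1\nT: 0.1 0.2 0.3"
```

Chaining such transforms (e.g. IMU-to-velodyne) is what "aligning the IMU with the lidar" amounts to in practice.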

SakuraMemoryKnight commented 4 years ago

@mengqingyu123 You can try evo: https://michaelgrupp.github.io/evo/

mengqingyu123 commented 4 years ago

@xuwuzhou Hello. Excuse me again. I recorded the /aft_mapped_to_init messages your way, as shown in figure 1 below, but that format is not usable in the evo toolbox. I also added code to the source to record the pose information, as shown in figure 2, but the data I finally got was only about 1000 lines, while the ground truth has over 4000 lines. The evo toolbox requires the number of poses to be equal when drawing a KITTI trajectory, so I still could not plot it.

I have tried many methods, but all of them have failed. I hope you can tell me your specific approach. Although you think your trajectory is not good enough, at least you can draw and analyze it, but I can't even draw it. Hope to get your reply, thank you! [images: trac, code]

SakuraMemoryKnight commented 4 years ago

@mengqingyu123 The number of poses does not need to be equal when drawing a KITTI trajectory with the evo tool. You can run: evo_traj kitti KITTI_00_ORB.txt KITTI_00_SPTAM.txt --ref=KITTI_00_gt.txt -p --plot_mode=xz

xuwuzhou commented 4 years ago

@mengqingyu123 Sorry, I haven't viewed this issue recently. You can save the poses in TUM format first (that's easy), then use evo to convert the result to KITTI format.
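For context on that TUM-to-KITTI step (evo can do it for you; the conversion itself is just expanding each quaternion into a 3x4 [R|t] matrix), here is a hedged sketch of the math, assuming unit quaternions in TUM's qx qy qz qw order:

```python
def quat_to_rot(qx, qy, qz, qw):
    """Convert a unit quaternion to a 3x3 rotation matrix (row-major)."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def tum_to_kitti_line(tum_line):
    """One TUM pose ('t tx ty tz qx qy qz qw') -> one KITTI pose line:
    the 3x4 matrix [R | t] flattened row-major (12 values, no timestamp)."""
    _, tx, ty, tz, qx, qy, qz, qw = map(float, tum_line.split())
    R = quat_to_rot(qx, qy, qz, qw)
    vals = [v for row, ti in zip(R, (tx, ty, tz)) for v in (row + [ti])]
    return " ".join("%.6f" % v for v in vals)
```

Note that the KITTI pose format drops the timestamp, which is why evo asks for a reference trajectory to associate poses.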

xuwuzhou commented 4 years ago

Hi @zcg3648806, have you solved the problem? Since the function projectPointCloud in imageProjection.cpp needs to be changed, I think the function laserCloudHandler in scanRegistration.cpp of A-LOAM could be a useful reference (because A-LOAM's results are good), but the problem hasn't been solved so far. Do you have any ideas or progress? Best wishes!

SakuraMemoryKnight commented 4 years ago

@xuwuzhou No, I think I'm going to give up on testing the KITTI dataset. I now think the problem with the IMU information is that its frequency is too low. In the raw KITTI data, the IMU and lidar frame rates are the same, but this is not reasonable; 10 Hz is too low for an IMU. In this algorithm, the IMU data is used to correct point cloud distortion and as the initial value for pose estimation, so having only one IMU frame per lidar frame should be the main reason for the poor performance with IMU data.

I think the secondary reason is that in the KITTI dataset there is a certain distance between the IMU mounting location and the lidar, so the IMU information received by the algorithm cannot accurately represent the lidar's motion.

In addition, I recently studied V-LOAM and found that its performance on the KITTI dataset is also poor. I think the reason is the same as with the IMU: the image frame rate of the KITTI dataset is the same as the lidar's, which is too low; normally the image frame rate should be about 30 Hz. To sum up, I think there are some problems with the KITTI dataset itself.

SakuraMemoryKnight commented 4 years ago

@xuwuzhou As for the lidar point projection problem, I don't think it has much influence. I ran the KITTI dataset with the default configuration and without IMU data, and the results were very good.

xuwuzhou commented 4 years ago

@zcg3648806 I'm very sorry for the late reply. I have tested the data without IMU too, and the result is really good (but sequence 01 has a large error). I must also point out that the raw data and sequences 00-10 are not exactly equal: the raw data is distorted, while sequences 00-10 are undistorted, and the results get better partly because of this. Also, the result without IMU loses some points; did you meet the same problem?

SakuraMemoryKnight commented 4 years ago

@xuwuzhou I'm very sorry for the late reply. I don't know whether there's a distortion difference between the odometry and raw_data sets, but the odometry set doesn't have IMU data. Besides, a little aside: do you have an implementation of V-LOAM? I only found stevenliu216/demo_lidar on GitHub; it works badly and is not really V-LOAM in the strict sense.

xuwuzhou commented 4 years ago

@zcg3648806 Sorry, I don't have an implementation of V-LOAM.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.