koide3 / direct_visual_lidar_calibration

A toolbox for target-less LiDAR-camera calibration [ROS1/ROS2]
https://koide3.github.io/direct_visual_lidar_calibration/

Automatic matching result issue #61

Open · jianghaijun007 opened this issue 1 year ago

jianghaijun007 commented 1 year ago

Hello, thank you very much for your incredible work! After using your tool to calibrate the data you provided and the data we collected ourselves, I have several questions:

  1. The calibration result can only be inspected as an overlap of the image projected onto the point cloud, adjusted via blend_weight, but the overlap is often hard to judge. In many cases I cannot tell how good the calibration result is. Why not project the point cloud onto the image instead?
  2. When we manually select matching points between the image and the point cloud, the grayscale image loses a lot of detail. Why not use a color image?
  3. This is the result of calibrating the officially provided livox_ros1 data with the automatic matching method. [screenshots: 2023-11-03 11-35-39, 2023-11-03 11-35-47] By adjusting blend_weight we can see that the overlap between the projected image and the point cloud is very good. This is the result of calibrating the officially provided ouster_ros1 data with the automatic matching method. [screenshot: 2023-11-03 17-01-27] Here I cannot judge the quality of the calibration by adjusting blend_weight. Is my result correct?
  4. This is data we collected ourselves with a repetitive-scan LiDAR in an autonomous driving scenario, with the sensors mounted on the vehicle. For manual matching mode we recorded several segments while the vehicle was stationary, but because the scene is cluttered it was difficult to select accurate correspondences and the calibration error was large. Do I need to collect data in a scene with more regular and distinct features? [image: 20231102145254_524]
  5. In automatic matching mode, we collected several segments of data while the vehicle was moving, but the matching results were still not good. Can you help me analyze the reason? Note: our data contains only image and point cloud topics, without camera info (intrinsics). [images: dy_2_1103_3 bag, dy_2_1103_3 bag_lidar_intensities, dy_2_1103_3 bag_superglue] Thank you for your reading and time!
koide3 commented 1 year ago
  1. Because we were interested in creating colored point clouds, we chose to project the image data onto the point cloud. I think a visualization with the opposite projection could easily be realized using, for example, rviz (a sketch of such a projection is given after this list).

  2. Because the image-point-cloud alignment algorithm works on intensity data, we convert images to mono8. That said, I think a minor modification would make it possible to show color images (see the second sketch below).

  3. The Ouster result looks corrupted. Did you enable dynamic point cloud integration?

  4. The environment itself contains rich geometric features and looks good, but make sure there are no dynamic objects in the scene.

  5. The accumulated point cloud is too sparse for feature extraction. I recommend recording a longer sequence with more movement to generate a denser point cloud (see the accumulation sketch below).
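
Regarding 1., here is a minimal sketch (not part of the toolbox) of the opposite projection: drawing LiDAR points onto the camera image to check the calibration visually. The transform `T_camera_lidar`, the intrinsic matrix `K`, and all variable/file names are assumptions for illustration, not the toolbox's API.

```python
import numpy as np
import cv2

def draw_points_on_image(image, points_lidar, T_camera_lidar, K, dist_coeffs=None):
    """Overlay LiDAR points (N x 3) on the camera image; color encodes depth."""
    # Transform points from the LiDAR frame into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = np.ascontiguousarray((T_camera_lidar @ pts_h.T).T[:, :3])

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Project with the standard pinhole model (distortion optional).
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    uv, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, dist_coeffs)
    uv = uv.reshape(-1, 2)

    # Draw points that fall inside the image, colored by depth.
    canvas = image.copy()
    depths = pts_cam[:, 2]
    max_depth = np.percentile(depths, 95)
    for (u, v), d in zip(uv, depths):
        u, v = int(round(u)), int(round(v))
        if 0 <= u < canvas.shape[1] and 0 <= v < canvas.shape[0]:
            c = int(255 * min(d / max_depth, 1.0))
            cv2.circle(canvas, (u, v), 1, (0, 255 - c, c), -1)
    return canvas
```

With a good calibration, depth discontinuities in the projected points should line up with object boundaries in the image.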
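
Regarding 2., the conversion itself is a one-liner, so one possible (purely hypothetical) tweak is to keep a color copy of the image for the manual picker view while still feeding mono8 to the intensity-based alignment:

```python
import cv2

color_image = cv2.imread("camera.png", cv2.IMREAD_COLOR)      # hypothetical file name
mono8_image = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)   # what the alignment consumes

# Show the color image for picking correspondences; use mono8_image for the optimizer.
cv2.imshow("picker view (color)", color_image)
cv2.waitKey(0)
```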
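
Regarding 5., more movement helps simply because more scans get merged into the map used for feature extraction. Below is a toy sketch of such an accumulation, assuming per-scan LiDAR poses are available (e.g. from odometry); the function and variable names are illustrative only.

```python
import numpy as np

def accumulate_scans(scans, poses_world_lidar):
    """Merge per-frame scans (each an N_i x 3 array) into one dense cloud in the world frame."""
    merged = []
    for scan, T in zip(scans, poses_world_lidar):
        pts_h = np.hstack([scan, np.ones((scan.shape[0], 1))])  # homogeneous coordinates
        merged.append((T @ pts_h.T).T[:, :3])                   # transform into the world frame
    return np.vstack(merged)
```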