koide3 / direct_visual_lidar_calibration

A toolbox for target-less LiDAR-camera calibration [ROS1/ROS2]
https://koide3.github.io/direct_visual_lidar_calibration/

Two cameras and a lidar calibration #118

Closed VeeranjaneyuluToka closed 3 weeks ago

VeeranjaneyuluToka commented 1 month ago

Describe the bug
We have two cameras and a LiDAR with a 120-degree horizontal FOV; the LiDAR's vertical FOV is ~21 degrees and the cameras' vertical FOV is ~68 degrees. I calibrated the cameras using mrcal and have the camera intrinsics.
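As a sanity check on those intrinsics, here is a minimal sketch (mine, not part of the toolbox) that builds the equivalent OpenCV camera model from the plumb_bob parameters and undistorts a frame; the image file name is hypothetical:

```python
import numpy as np
import cv2

# --camera_intrinsics fx,fy,cx,cy (cam1 values)
fx, fy, cx, cy = 1033.560802, 1036.428785, 717.8100901, 465.5386154
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# --camera_distortion_coeffs k1,k2,p1,p2,k3 (plumb_bob = OpenCV radial-tangential)
dist = np.array([-0.1690278926, 0.1010354026, 3.150606909e-05,
                 -0.0007543540876, -0.03494961136])

# Undistort a sample frame; straight edges (walls, poles) should come out straight.
img = cv2.imread("cam1_sample.png")  # hypothetical sample frame
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("cam1_undistorted.png", undistorted)
```

If straight lines still look curved after undistortion, the intrinsics themselves are suspect before any extrinsic calibration is attempted.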

I calibrated each camera against the LiDAR by passing the associated parameters (see the commands below), but the cam2-to-LiDAR result looks wrong to me based on the projection.

To Reproduce
Steps to reproduce the behavior. These are the commands I use to calibrate LiDAR-to-cam1:

ros2 run direct_visual_lidar_calibration preprocess /tmp/input_bags/inhouse_sensors_data/calibration/ /tmp/preprocessed/cam1 -v -d \
  --camera_intrinsics 1033.560802,1036.428785,717.8100901,465.5386154 \
  --camera_distortion_coeffs -0.1690278926,0.1010354026,3.150606909e-05,-0.0007543540876,-0.03494961136 \
  --camera_model plumb_bob \
  --camera_info_topic /lucid_vision/camera_1/camera_info \
  --image_topic /lucid_vision/camera_1/image \
  --points_topic /ch128x1/lslidar_point_cloud

ros2 run direct_visual_lidar_calibration find_matches_superglue.py /tmp/preprocessed/cam1
ros2 run direct_visual_lidar_calibration initial_guess_auto /tmp/preprocessed/cam1
ros2 run direct_visual_lidar_calibration calibrate /tmp/preprocessed/cam1

LiDAR-to-cam2:

ros2 run direct_visual_lidar_calibration preprocess /tmp/input_bags/inhouse_sensors_data/calibration/ /tmp/preprocessed/cam2 -v -d \
  --camera_intrinsics 1033.777471,1036.416677,721.7705575,455.3776902 \
  --camera_distortion_coeffs -0.159313419,0.07341628381,-0.0002273937626,-0.0001152917108,-0.003799163098 \
  --camera_model plumb_bob \
  --camera_info_topic /lucid_vision/camera_2/camera_info \
  --image_topic /lucid_vision/camera_2/image \
  --points_topic /ch128x1/lslidar_point_cloud

ros2 run direct_visual_lidar_calibration find_matches_superglue.py /tmp/preprocessed/cam2
ros2 run direct_visual_lidar_calibration initial_guess_auto /tmp/preprocessed/cam2
ros2 run direct_visual_lidar_calibration calibrate /tmp/preprocessed/cam2

Expected behavior
The projection should be correct once calibration completes. Looking at the images below, both camera images end up projected to the same place on the LiDAR feed. It would be a great help if you could give us any pointers on how to investigate this.
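One way to investigate is to project the LiDAR points into each camera image with the estimated extrinsics and inspect the overlay. A minimal sketch, assuming a 4x4 T_camera_lidar exported from the calibration result and an Nx3 scan; the file names here are hypothetical:

```python
import numpy as np
import cv2

# cam2 intrinsics and distortion, same values passed to preprocess
fx, fy, cx, cy = 1033.777471, 1036.416677, 721.7705575, 455.3776902
K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
dist = np.array([-0.159313419, 0.07341628381, -0.0002273937626,
                 -0.0001152917108, -0.003799163098])

T_camera_lidar = np.loadtxt("T_camera_lidar.txt")  # hypothetical 4x4 export
points = np.load("scan.npy")                       # hypothetical Nx3 LiDAR points

# Transform into the camera frame and keep points in front of the camera.
pts_cam = (T_camera_lidar[:3, :3] @ points.T + T_camera_lidar[:3, 3:4]).T
pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

# Project with the calibration intrinsics and draw onto the camera image.
uv, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, dist)
img = cv2.imread("cam2_sample.png")                # hypothetical frame
for u, v in uv.reshape(-1, 2).astype(int):
    if 0 <= u < img.shape[1] and 0 <= v < img.shape[0]:
        cv2.circle(img, (int(u), int(v)), 1, (0, 255, 0), -1)
cv2.imwrite("projection_check.png", img)
```

If the projected points land in the same place for both cameras despite their different mountings, the extrinsic estimate (or the initial guess) for one of them is likely off.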

Screenshots and sample data
This projection looks correct to me, at least based on the camera and LiDAR feeds.
image

This projection looks wrong to me.
image

Additional context
Wondering if there is anything I missed here.

Seekerzero commented 4 weeks ago

It looks like there are many points collected from the LiDAR. Could you verify whether the dynamic integrator works correctly by passing -dv instead of -v -d? Also, if you used SuperGlue for the initial guess, you can draw the matched points to see whether anything went wrong; see the sketch below.
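A rough sketch of drawing the matches; the file format here is an assumption, so adapt it to whatever find_matches_superglue.py actually writes out:

```python
import numpy as np
import cv2

img_cam = cv2.imread("camera_image.png")          # hypothetical file names
img_lidar = cv2.imread("lidar_intensities.png")

# Hypothetical: matched pixel coordinates as two Nx2 arrays (camera, LiDAR).
kpts_cam = np.load("kpts_cam.npy")
kpts_lidar = np.load("kpts_lidar.npy")

# Stack the two images side by side and draw one line per correspondence.
h = max(img_cam.shape[0], img_lidar.shape[0])
canvas = np.zeros((h, img_cam.shape[1] + img_lidar.shape[1], 3), np.uint8)
canvas[:img_cam.shape[0], :img_cam.shape[1]] = img_cam
canvas[:img_lidar.shape[0], img_cam.shape[1]:] = img_lidar
for (u1, v1), (u2, v2) in zip(kpts_cam.astype(int), kpts_lidar.astype(int)):
    cv2.line(canvas, (int(u1), int(v1)),
             (int(u2) + img_cam.shape[1], int(v2)), (0, 255, 0), 1)
cv2.imwrite("matches_visualization.png", canvas)
```

Correct matches should connect the same physical structures in both images; lines criss-crossing randomly indicate a failed initial guess.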

VeeranjaneyuluToka commented 4 weeks ago

@Seekerzero , thanks for your reply!

It works fine even with -dv, but I am wondering what the difference between -dv and -v -d is. I could generate matches from SuperGlue, and they look wrong to me. Does that mean the data is error-prone and I need to regenerate it in a better environment, following the recommendations?

Seekerzero commented 3 weeks ago

Hi, technically it should be the same; I just wanted to verify that the dynamic integrator works. Could you share a lidar_intensities image generated by the program? If these images have too many sparse, empty areas, SuperGlue might not be able to find the matches correctly.
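A quick heuristic (my own, not from the toolbox) to quantify the sparsity: measure the fraction of pixels that carry any LiDAR return; the file name is hypothetical:

```python
import numpy as np
import cv2

img = cv2.imread("lidar_intensities.png", cv2.IMREAD_GRAYSCALE)
coverage = np.count_nonzero(img) / img.size
print(f"non-empty pixel ratio: {coverage:.2%}")
# A low ratio suggests recording longer with sensor motion so the
# dynamic integrator (-d) can densify the point cloud.
```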

VeeranjaneyuluToka commented 3 weeks ago

Hi,

This is how the intensity image looks, and when we observe them carefully, the matches from SuperGlue look wrong.

Intensity image: image

Superglue matcher output: image

We feel that the current intensity image is not good enough. Do you think the same, and do you have any suggestions on the data collection and the environment we should use?

Seekerzero commented 3 weeks ago

You may want to move the camera-LiDAR system during data recording to perform LiDAR SLAM of the captured environment and densify the point cloud, so that you can take full advantage of the dynamic integrator (the -d parameter).

VeeranjaneyuluToka commented 3 weeks ago

@Seekerzero, thanks for your inputs and time. I collected samples with a bit of movement over a few seconds of recording, used manual initialization, and could generate plausible extrinsics. Really nice implementation and visualization tools; thanks for the awesome work!