ankitdhall / lidar_camera_calibration

ROS package to find a rigid-body transformation between a LiDAR and a camera for "LiDAR-Camera Calibration using 3D-3D Point correspondences"
http://arxiv.org/abs/1705.09785
GNU General Public License v3.0

Distortion matrix #27

Closed blutjens closed 6 years ago

blutjens commented 6 years ago

Does the lidar_camera_calibration package take the camera's intrinsic distortion matrix as input?

The aruco_mapping node takes the distortion matrix as input, so I wonder whether leaving the distortion matrix out of lidar_camera_calibration introduces some imprecision.

ankitdhall commented 6 years ago

As you said, aruco_mapping uses the distortion coefficients because that node actually processes the image data to generate the transformation matrices. lidar_camera_calibration, however, only uses the camera matrix to (approximately) project the 3D points from the LiDAR so that edges can be marked and selected. The real information here comes from the 3D points obtained from the LiDAR. We do not need to know the camera matrix very accurately for this projection, since it only serves to make marking the LiDAR point cloud easier. If, for instance, you used a slightly different matrix to project the 3D points (for marking), the projection might look a little different visually, but the 3D point coordinates that are actually used remain the same.
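
For illustration, here is a minimal sketch of the kind of projection described above; it is not code from the package, and the function name, the rough extrinsic guess (R, t), and the example intrinsic matrix K are all assumptions. It shows how 3D LiDAR points can be projected with just a pinhole camera matrix, deliberately ignoring distortion coefficients, because the result is only used for visual marking:

```python
import numpy as np

def project_lidar_points(points_lidar, K, R, t):
    """Project Nx3 LiDAR points into the image with a plain pinhole model.

    K : 3x3 camera intrinsic matrix
    R, t : rough LiDAR-to-camera rotation (3x3) and translation (3,)
    Distortion coefficients are deliberately not applied; the projection is
    only used to display the cloud so edge points can be marked by hand.
    """
    pts_cam = points_lidar @ R.T + t      # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep points in front of the camera
    uv = (K @ pts_cam.T).T                # apply the intrinsics
    return uv[:, :2] / uv[:, 2:3]         # perspective divide -> pixel coords

# Made-up example values: an approximate K is fine here because the marked
# 3D coordinates, not the projected pixel positions, feed the calibration.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
points = np.array([[1.0, 0.5, 4.0], [-0.5, 0.2, 3.0]])
print(project_lidar_points(points, K, R, t))
```

A slightly wrong K (or missing distortion correction) only shifts where the points appear on screen; the 3D coordinates passed on to the 3D-3D correspondence step are unchanged.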

blutjens commented 6 years ago

That makes sense! Basically, all information about the camera comes from the aruco package; the projection matrix passed to lidar_camera_calibration is just auxiliary. Thanks!

villanuevab commented 6 years ago

Where do we specify the distortion parameters in aruco_mapping? Do they have to come from /camera_info?

I am using a wide-angle lens with a lot of distortion, and the calibration results are noticeably worse than with a narrower field-of-view lens.