blutjens closed this issue 6 years ago
As you said, aruco-mapping uses the distortion coefficients because that node actually processes the image data to generate the transformation matrices. However, lidar_camera_calibration only uses the camera matrix to (approximately) project the 3D points from the LiDAR so that the edges can be marked and selected. The real information here is the 3D points, which come from the LiDAR. We do not need to know the camera matrix very accurately for projecting, since the projection only makes it easier to mark the LiDAR point cloud. If, for instance, you use a slightly different matrix to project the 3D points (for marking), they might look a little different visually, but the 3D point coordinates used are the same.
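A minimal sketch of the point above, using a made-up camera matrix and LiDAR point (all values here are hypothetical, not taken from either package): perturbing the camera matrix shifts where a 3D point lands in the image, but the 3D coordinates themselves are untouched.

```python
import numpy as np

def project(K, pts_3d):
    """Pinhole projection: map Nx3 camera-frame points to pixel coordinates."""
    uvw = pts_3d @ K.T               # (N, 3) homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # divide by depth to get pixels

K1 = np.array([[500.0,   0.0, 320.0],
               [  0.0, 500.0, 240.0],
               [  0.0,   0.0,   1.0]])
K2 = K1.copy()
K2[0, 0] = 510.0                     # slightly wrong focal length

pts = np.array([[1.0, 0.5, 4.0]])    # one hypothetical 3D point (camera frame)

uv1 = project(K1, pts)               # pixels under the correct matrix
uv2 = project(K2, pts)               # pixels under the perturbed matrix
# The two pixel positions differ slightly, but `pts` itself never changed:
# the 3D coordinates fed to the calibration are the same either way.
```

This is why a rough camera matrix is good enough here: it only moves the overlay used for marking, not the LiDAR measurements being calibrated.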
That makes sense! Basically, all information about the camera comes from the aruco package; the projection matrix input to lidar_camera_calibration is just auxiliary. Thanks!
Where do we specify the distortion parameters in aruco_mapping? Do they have to come from /camera_info?
I am using a wide-angle lens which has a lot of distortion, and the calibration results are noticeably worse than when I use a narrower lens.
Does the lidar-camera-calibration package take the camera's intrinsic distortion coefficients as input?
aruco-mapping takes the distortion coefficients as input. I wonder whether not including them in lidar-camera-calibration introduces some imprecision.
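To see why a wide-angle lens makes the marking overlay worse, here is a sketch of radial distortion in the style of the plumb-bob model used in ROS camera_info (only the k1 term; k2, k3, p1, p2 omitted). The camera matrix, 3D point, and k1 value are all hypothetical, chosen for illustration.

```python
import numpy as np

def project_plumb_bob(K, pt_3d, k1=0.0):
    """Project one camera-frame 3D point, optionally with k1 radial distortion."""
    x, y, z = pt_3d
    xn, yn = x / z, y / z            # normalized image coordinates
    r2 = xn * xn + yn * yn           # squared radius from the optical axis
    scale = 1.0 + k1 * r2            # first-order radial distortion factor
    u = K[0, 0] * xn * scale + K[0, 2]
    v = K[1, 1] * yn * scale + K[1, 2]
    return u, v

K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])
pt = (1.5, 1.0, 2.0)                 # off-center point in the camera frame

u_ideal, v_ideal = project_plumb_bob(K, pt)            # distortion ignored
u_dist, v_dist = project_plumb_bob(K, pt, k1=-0.2)     # wide-angle-like k1
# The pixel error (u_ideal - u_dist) grows with distance from the image
# center, so with a strongly distorting lens the projected LiDAR points can
# drift noticeably from the image edges you are trying to mark.
```

Since only the marking overlay is affected, one workaround is to feed lidar_camera_calibration rectified images (and the corresponding rectified camera matrix) so the distortion is removed before projection.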