Steps to use MATLAB calibration toolbox for lidar camera calibration
This repo includes the steps to use the MATLAB single camera calibration toolbox and lidar camera calibration toolbox.
It covers converting rosbag data to images and point clouds, single camera intrinsic calibration, lidar camera extrinsic calibration, and exporting the result for ROS.
The checkerboard used in this repo is a 9x6 checkerboard with a 90mm square size and 60mm padding, as shown in the following figure.
The data is collected as rosbag files. The lidars and cameras are fixed on a parked vehicle, and the checkerboard is held by a person (it's recommended to use a tripod to hold the checkerboard instead).
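For context, the conversion step below pairs each kept camera image with a lidar point cloud and drops frames to reach a target rate. A minimal sketch of that idea (hypothetical helper, assuming nearest-timestamp matching):

```python
def sync_and_downsample(img_stamps, lidar_stamps, sensor_hz, target_hz):
    """Keep every Nth image to go from sensor_hz to target_hz, and pair
    each kept image timestamp with the closest lidar timestamp."""
    step = max(1, int(round(sensor_hz / target_hz)))
    pairs = []
    for i in range(0, len(img_stamps), step):
        t = img_stamps[i]
        # nearest-neighbor match on timestamps
        j = min(range(len(lidar_stamps)), key=lambda k: abs(lidar_stamps[k] - t))
        pairs.append((i, j))
    return pairs
```

For example, downsampling 10Hz streams to 5Hz keeps every second frame and matches each one to its nearest point cloud in time.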
Use the parseRosbag.m script to convert the rosbag file to png and pcd files. It synchronizes the lidar point clouds and camera images, and downsamples them to the specified rate. This script is based on this link. Here you need to modify the following parameters:
- `rosbag_path`: the rosbag file path
- `save_path`: the directory to save the files
- `imageTopic`: the image topic name
- `lidarTopic`: the lidar topic name
- `downsample_rate`: the downsample rate for the lidar point clouds and images. The lidar typically runs at 10Hz, and it's not necessary to save every frame, so downsampling to 1Hz or 0.5Hz is usually sufficient.

Then open the images in the MATLAB Camera Calibrator app:

- Select `Checkerboard` as the pattern and `90mm` as the square size, and set `Image distortion` to `High`.
- In `Options`, choose `2 Coefficients` for `Radial distortion` and set `Tangential distortion` to `Enabled`; this is the commonly used plumb bob model.
- Review the images in the `Data Browser`, and then click `Calibrate`.
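The plumb bob model mentioned above uses two radial coefficients (k1, k2) and two tangential coefficients (p1, p2). A minimal sketch of how it distorts a normalized image point, assuming the standard plumb bob equations:

```python
def distort_plumb_bob(x, y, k1, k2, p1, p2):
    """Apply the plumb bob model (2 radial + 2 tangential coefficients)
    to a normalized image point (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all coefficients at zero the point is unchanged; the calibrator estimates k1, k2, p1, p2 so that this mapping explains the observed checkerboard corners.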
Use the undist_all_images.m script to undistort the images. It also generates a new `undistCameraParams` object that stores the parameters for the undistorted images, which will be used in the lidar camera calibration.
Here you need to modify the following parameters:
You can also run the convert_intrinsic_into_yaml.m script to convert the intrinsic parameters into a yaml file, which can be used with the image_proc ROS package.
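For reference, a sketch of building the CameraInfo-style YAML that image_proc consumes (field layout as in ROS camera_calibration_parsers; the intrinsic values in the usage example are hypothetical):

```python
def camera_info_yaml(name, w, h, K, D):
    """Build a CameraInfo-style YAML string.
    K is the row-major 3x3 camera matrix (9 values),
    D holds the plumb bob coefficients [k1, k2, p1, p2, k3]."""
    # projection matrix: K padded to 3x4 with a zero column (monocular case)
    P = K[0:3] + [0.0] + K[3:6] + [0.0] + K[6:9] + [0.0]
    return (
        f"image_width: {w}\n"
        f"image_height: {h}\n"
        f"camera_name: {name}\n"
        "camera_matrix:\n  rows: 3\n  cols: 3\n"
        f"  data: {K}\n"
        "distortion_model: plumb_bob\n"
        "distortion_coefficients:\n  rows: 1\n  cols: 5\n"
        f"  data: {D}\n"
        "rectification_matrix:\n  rows: 3\n  cols: 3\n"
        "  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]\n"
        "projection_matrix:\n  rows: 3\n  cols: 4\n"
        f"  data: {P}\n"
    )
```

Since the images are already undistorted by the previous step, the distortion coefficients written for the undistorted stream would be all zeros.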
Before you start the lidar camera calibration, it is recommended to check the following things:
- Go through the `UndistImages` folder and delete the bad images, such as those where the checkerboard is not fully visible. Even though the calibration toolbox can flag bad images, it's better to remove them manually.
- Make sure the pcd files are in the same directory as the `UndistImages` folder.

Then, in the Lidar Camera Calibrator app:

- Select `Checkerboard` as the pattern and `90mm` as the square size, and also specify the `padding` size as `60mm`.
- Use `Fixed Intrinsic` for the camera, and load the `undistCameraParams.mat` file.
- Skip the `Remove Ground` option if you have already cropped the point clouds using the crop_points.m script.
- Try `Detect Checkerboard` first; if it fails, you can adjust the `Cluster Threshold` and `Dimension Tolerance` to make it work.
- Otherwise, use `Select Checkerboard` to pick the checkerboard plane points manually. If you didn't crop the points, you may struggle to tune the view angle to select the checkerboard plane points, so it's recommended to crop the points first.
- Click `Calibrate`; after the calibration, check the reprojection error, and then export the calibration result.
- You can set an `Initial Transform` to make the checkerboard plane more parallel with the lidar scanning plane. The `Initial Transform` is necessary when the orientations of the two sensors are not aligned; use the rigid3d function to define a rigid3d object for the initial transform. Please note that the `T` used in MATLAB is not the common convention: rigid3d stores the transform for row-vector points, `[x y z 1] * T` with `T = [R' 0; t 1]`, rather than the usual `T = [R t; 0 1]` for column vectors.

Through the Lidar Camera Calibration, we can get the transformation matrix that transforms the lidar points into the camera frame.
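The rigid3d convention difference noted above can be sketched numerically; here `R` and `t` are the column-vector-convention rotation and translation (a sketch, not the toolbox code):

```python
import numpy as np

def matlab_tform_from_Rt(R, t):
    """MATLAB's rigid3d stores T for ROW-vector points, [x y z 1] @ T,
    so T = [[R^T, 0], [t, 1]] rather than the usual [[R, t], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(R).T  # rotation block is transposed
    T[3, :3] = np.asarray(t)     # translation sits in the last row
    return T

def common_tform_from_matlab(T_matlab):
    """Convert back to the usual column-vector form: simply transpose."""
    return np.asarray(T_matlab).T
```

So when you move a transform between MATLAB and ROS conventions, a single transpose of the 4x4 matrix is all that is needed.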
However, this matrix is not in a format that ROS can use directly, so we use the convert_tform_to_ROS.m script to convert it into a yaml file, which you can use with a static transform publisher to publish the transformation between the lidar and camera frames.
The desired format is shown below:
```
static_transform_publisher x y z yaw pitch roll frame_id child_frame_id period_in_ms
```
Here yaw is rotation about Z, pitch is rotation about Y, and roll is rotation about X, all in radians.
The frame_id is typically the lidar frame, and the child_frame_id is typically the camera frame. So the translation is the position of the camera frame in the lidar frame, and the rotation is the orientation of the camera frame in the lidar frame.
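A sketch of extracting the static_transform_publisher arguments from a 4x4 transform (column-vector convention, ZYX Euler order as used by tf; the helper name is hypothetical):

```python
import math

def tform_to_static_tf_args(T):
    """Decompose a 4x4 camera-in-lidar transform (column-vector convention,
    R = Rz(yaw) @ Ry(pitch) @ Rx(roll)) into the static_transform_publisher
    arguments: x y z yaw pitch roll, angles in radians."""
    x, y, z = T[0][3], T[1][3], T[2][3]
    yaw = math.atan2(T[1][0], T[0][0])
    pitch = math.atan2(-T[2][0], math.hypot(T[2][1], T[2][2]))
    roll = math.atan2(T[2][1], T[2][2])
    return x, y, z, yaw, pitch, roll
```

For example, a pure 90-degree rotation about Z with translation (1, 2, 3) decomposes to yaw = pi/2 and zero pitch and roll.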