ankitdhall / lidar_camera_calibration

ROS package to find a rigid-body transformation between a LiDAR and a camera for "LiDAR-Camera Calibration using 3D-3D Point correspondences"
http://arxiv.org/abs/1705.09785
GNU General Public License v3.0

How can I evaluate my calibration results? #15

Closed ZiqiChai closed 6 years ago

ZiqiChai commented 7 years ago

I saw that you said you got a result with an error of about 1-2 cm. How did you measure the true position so that you could compare it with your calibration result?

ankitdhall commented 7 years ago

The estimates obtained from lidar_camera_calibration were compared with manually measured values. Admittedly, this is a chicken-and-egg problem: we calibrate in the first place to obtain estimates of the rotation and translation that should be better than human accuracy. Comparing the lidar_camera_calibration estimates directly with manual measurements therefore doesn't make much sense; however, it does provide a coarse idea of how "good" these values are.

To make things more concrete, we used lidar_camera_calibration to calibrate multiple stereo cameras and then used the transformations obtained between them to fuse the point clouds from the individual cameras. If the calibration is nearly perfect, there will be no hallucinations, and parts of the world seen by both cameras (and present in both point clouds) should fuse cleanly without artifacts. A red ball that appears in both point clouds should remain a single red ball (preserving its 3D structure) after fusion with the correct rotation and translation values.

The results of the point cloud fusion can be seen here and here. The discrepancy in the fused point cloud can also be measured if one knows the size of the objects in the scene. These values were monitored over various runs and different experimental setups to find out to what granularity the lidar_camera_calibration estimates are accurate.
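For illustration, here is a minimal sketch of the fusion step described above, using PCL and Eigen (not code from this repository); the rotation `R_ab` and translation `t_ab` are placeholders to be replaced with the output of the calibration:

```cpp
// Sketch: fuse two point clouds given the rigid-body transform between
// the two camera frames. R_ab and t_ab below are placeholder values;
// substitute the estimates produced by lidar_camera_calibration.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/transforms.h>
#include <Eigen/Dense>

int main() {
  pcl::PointCloud<pcl::PointXYZRGB> cloud_a, cloud_b, cloud_b_in_a, fused;
  // ... fill cloud_a and cloud_b from the two cameras ...

  // Placeholder extrinsics mapping camera_b coordinates into camera_a's frame.
  Eigen::Matrix4f T_ab = Eigen::Matrix4f::Identity();
  T_ab.block<3, 3>(0, 0) = Eigen::Matrix3f::Identity();       // R_ab from calibration
  T_ab.block<3, 1>(0, 3) = Eigen::Vector3f(0.1f, 0.0f, 0.0f); // t_ab from calibration

  // Express cloud_b in camera_a's frame, then concatenate the clouds.
  pcl::transformPointCloud(cloud_b, cloud_b_in_a, T_ab);
  fused = cloud_a;
  fused += cloud_b_in_a;

  // With correct extrinsics, structure visible to both cameras (e.g. the
  // red ball) overlaps in `fused` without ghosting or duplication.
  return 0;
}
```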

For more technical details, please refer to this.

ZiqiChai commented 7 years ago

Hi, I tried something and found that when I change the axes, the fused point cloud looks much better. In detail: I get the roll, pitch, yaw angles from the matrix R, then change their directions as x->z, y->-x, z->-y; the same change has to be applied to the t vector. Then the point clouds fuse better. So I guess my ZED's axes are different from yours? Or did you make a similar modification before fusing?
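For concreteness, here is one reading of that axis swap written as a matrix (my interpretation, not code from the repository). If the calibration gives p_cam = R * p_lidar + t and the mismatch is only in the camera-frame convention, the corrected pair would be R' = P * R and t' = P * t for the permutation matrix P below:

```cpp
// Sketch: re-express the calibration result in a camera frame whose axes
// are remapped as x->z, y->-x, z->-y 
// (old x becomes new z, old y becomes new -x, old z becomes new -y).
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Acting on coordinates, the remap above reads: x' = -y, y' = -z, z' = x.
  Eigen::Matrix3d P;
  P << 0, -1,  0,
       0,  0, -1,
       1,  0,  0;  // det(P) = +1, so this is a proper rotation, not a reflection

  Eigen::Matrix3d R = Eigen::Matrix3d::Identity(); // placeholder calibration R
  Eigen::Vector3d t(0.1, 0.0, 0.0);                // placeholder calibration t

  Eigen::Matrix3d R_fixed = P * R;
  Eigen::Vector3d t_fixed = P * t;
  std::cout << "R':\n" << R_fixed << "\nt': " << t_fixed.transpose() << "\n";
  return 0;
}
```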

ankitdhall commented 7 years ago

The camera frame axes assumed by lidar_camera_calibration are:
- z-axis: forward facing, perpendicular to the image plane,
- x-axis: horizontal, pointing left, and
- y-axis: vertical, pointing downwards,

forming a right-handed co-ordinate system (see the sketch below for using the extrinsics under this convention).
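As a usage note, here is a minimal sketch (assumed usage, not code from the repository) of applying the estimated [R|t] under this camera-frame convention, assuming the transform maps LiDAR coordinates into camera coordinates:

```cpp
// Sketch: bring a LiDAR point into the camera frame described above.
// R and t are placeholders for the lidar_camera_calibration output.
#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix3d R = Eigen::Matrix3d::Identity(); // placeholder rotation
  Eigen::Vector3d t(0.0, 0.0, 0.2);                // placeholder translation (m)

  Eigen::Vector3d p_lidar(5.0, 0.0, 1.0);          // a point in the LiDAR frame
  Eigen::Vector3d p_cam = R * p_lidar + t;

  // In the camera frame above, +z is depth into the scene, +x points left,
  // and +y points down; a point in front of the camera has p_cam.z() > 0.
  std::cout << "camera-frame point: " << p_cam.transpose() << "\n";
  return 0;
}
```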