beltransen / velo2cam_calibration

Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups. ROS Package.
http://wiki.ros.org/velo2cam_calibration
GNU General Public License v2.0

Can I use this package to calibrate PandarQT which has a different coordinate than velodyne? #35

Closed · Git89 closed this issue 3 years ago

Git89 commented 3 years ago

This is great work, and I appreciate it a lot, but I have two questions about this tool. Could you help me with them?

Question 1: Can I use this package to calibrate a PandarQT, which has a different coordinate frame than a Velodyne?

Question 2: Could you also tell me why mono_pattern.launch contains this line: `<node pkg="tf" type="static_transform_publisher" name="camera_ros_tf_$(arg sensor_id)" args="0 0 0 -1.57079632679 0 -1.57079632679 rotated_$(arg frame_name) $(arg frame_name) 10"/>`? Does this line transform the camera's coordinate frame so that the calibration is easier to perform? When I evaluate the calibration results from this tool, do I need to take this extra transform into account and add it to the calibration parameters the tool produces? And what would happen if I deleted this transform from the code?

cguindel commented 3 years ago

Thank you! And sorry for the late reply.

  1. Yes, sure. In fact, we have tested this package internally with a Pandar64. Of course, you have to be careful when adjusting the passthrough filters (since the coordinate frames are different, as you said), but that is all, IIRC.
  2. You guessed correctly: that line is an intermediate transform that we use to align the camera axes with the LiDAR coordinate convention, so that the rotation magnitudes to be estimated by the registration step are considerably smaller. And yes, you have to take that extra transform into account when using the calibration obtained by our tool; that is why it is included in the calibrated_tf.launch generated at the end of the procedure. If you need to use the calibration outside TF, you can obtain the final transform by composing those two steps (sensor1-rotated_sensor2 and rotated_sensor2-sensor2) via transform matrix multiplication; see the sketch below this list. Strictly speaking, this intermediate step may no longer be necessary, since the method supports large angular displacements between the sensors, but, based on our experience, we strongly recommend keeping it.
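
For reference, here is a minimal, hypothetical sketch of that composition outside TF. It assumes the sensor1-rotated_sensor2 result is available as a translation plus a quaternion (the numeric values below are placeholders, not real calibration output), uses `tf.transformations` from the ROS `tf` package, and rebuilds the fixed rotated_sensor2-sensor2 step from the static_transform_publisher arguments (x y z yaw pitch roll = 0 0 0 -pi/2 0 -pi/2):

```python
# Minimal sketch (not part of the package): compose the two TF steps into a
# single 4x4 extrinsic matrix for use outside TF.
import numpy as np
import tf.transformations as tft  # from the ROS 'tf' package


def to_matrix(translation, quaternion):
    """Homogeneous 4x4 matrix from (x, y, z) and (qx, qy, qz, qw)."""
    T = tft.quaternion_matrix(quaternion)
    T[:3, 3] = translation
    return T


# sensor1 -> rotated_sensor2, as estimated by the calibration
# (placeholder values, replace with your own result).
T_s1_rot2 = to_matrix([0.10, -0.20, 0.05], [0.0, 0.0, 0.0, 1.0])

# rotated_sensor2 -> sensor2: static_transform_publisher takes yaw pitch roll,
# which corresponds to euler_matrix(roll, pitch, yaw, 'sxyz') under the usual
# tf RPY convention; the translation is zero.
yaw, pitch, roll = -np.pi / 2, 0.0, -np.pi / 2
T_rot2_s2 = tft.euler_matrix(roll, pitch, yaw, 'sxyz')

# Chain the two steps by matrix multiplication.
T_s1_s2 = np.dot(T_s1_rot2, T_rot2_s2)
print(T_s1_s2)
```

The resulting matrix maps points expressed in the sensor2 frame into the sensor1 frame, which is the same convention TF uses for a parent-to-child transform.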

If you need more info (e.g., about the questions in #20, which I am not sure have already been answered), please let us know.