beltransen / velo2cam_calibration

Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups. ROS Package.
http://wiki.ros.org/velo2cam_calibration
GNU General Public License v2.0

Need some help for setting camera & lidar launch #43

Closed HoangLoc1610 closed 2 years ago

HoangLoc1610 commented 2 years ago

Hi Author,

Thank you very much for your contribution. I am trying to use your software to auto-calibrate a mono camera and a 16-channel LiDAR. We got the output roll, pitch, yaw and x, y, z; based on this, we built the extrinsic matrix RT and tried to project all the LiDAR points back onto the image, but the projection returns very bad results.

I have been looking deeply into your code and have some questions; could you please help?

  1. What is the static_transform_publisher in the mono pattern launch file for? <node pkg="tf" type="static_transform_publisher" name="camera_rostf$(arg sensor_id)" args="0 0 0 -1.57079632679 0 -1.57079632679 rotated_monocam monocam 10"/> I don't understand why we need to rotate roll and yaw by -90 degrees, and why the registration uses this rotated frame. Must we do that?
  2. What about the LiDAR? Should we rotate the point cloud too?
  3. Visualizing both in the image and in RViz, we can see that all the center points are correct, but we are still not able to project the 3D points onto the 2D image correctly. Do you have any idea?

I would really appreciate it if you could help. Thank you & best regards, Loc Hoang

cguindel commented 2 years ago

Hi, @HoangLoc1610.

I'm not sure if I understand all your points, but I will try to give you an answer to them:

  1. The static publisher in mono_pattern.launch is intended to provide an intermediate transform during the calibration procedure to account for the different definitions of LiDAR and camera coordinate systems (z-axis pointing upwards in LiDAR vs. z-axis pointing forward in camera), thus reducing the magnitudes of the estimated transform (see the sketch after this list). Therefore, calibration results refer indeed to this intermediate frame. Note that this transform is already included in the resulting calibrated_tf.launch files to make the process transparent to the user; you should check how it is included there if you use your own TF broadcast implementations. Please refer to #35 for more information about this.
  2. LiDAR clouds are not rotated; only camera frames have this intermediate transform to make their coordinate frames similar to the LiDAR ones.
  3. I'm not sure what the problem is here. Could you provide more information, e.g., screenshots? To visualize the projection in RVIZ, sensors' stamps must be synchronized and you must choose the 'Camera' display type.
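To illustrate the first point, here is a minimal numpy sketch (not code from the package). static_transform_publisher takes its arguments in x y z yaw pitch roll order, so the launch line above encodes yaw = -π/2, pitch = 0, roll = -π/2; with the ROS fixed-axis convention R = Rz(yaw) · Ry(pitch) · Rx(roll), this maps the camera axis convention onto the LiDAR-style one:

```python
import numpy as np

# Fixed-axis rotation used by the static publisher:
# R = Rz(yaw) @ Ry(pitch) @ Rx(roll), with roll = yaw = -pi/2, pitch = 0.
a = -np.pi / 2
Rx = np.array([[1, 0, 0],
               [0, np.cos(a), -np.sin(a)],
               [0, np.sin(a),  np.cos(a)]])
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]])
R = Rz @ Rx  # pitch = 0, so Ry is the identity

# The camera's optical axis (z, forward) becomes x (forward, LiDAR-style):
print(np.round(R @ [0.0, 0.0, 1.0]))  # -> [1. 0. 0.]
# Camera x (right) -> -y, camera y (down) -> -z, as in LiDAR conventions:
print(np.round(R @ [1.0, 0.0, 0.0]))  # -> [ 0. -1.  0.]
print(np.round(R @ [0.0, 1.0, 0.0]))  # -> [ 0.  0. -1.]
```
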
HoangLoc1610 commented 2 years ago

Hi,

Thank you very much for your reply. So it's better to keep this line, right? <node pkg="tf" type="static_transform_publisher" name="camera_rostf$(arg sensor_id)" args="0 0 0 -1.57079632679 0 -1.57079632679 rotated_monocam monocam 10"/>

But it also means that, once I get the transform info (roll, pitch, yaw, x, y, z) and convert from point cloud to image coordinates, we need to rotate one more time with roll, yaw = 1.57079632679 and pitch = 0, right? And then we can apply projectPoints to get the 2D x, y in pixels, right?

Currently I tried to set <node pkg="tf" type="static_transform_publisher" name="camera_rostf$(arg sensor_id)" args="0 0 0 0 0 0 rotated_monocam monocam 10"/>, but the result was really bad. The red dots should sit at the centers of the 4 holes, but after converting I got 4 points with two of them very close to each other, which is why you only see 3 red dots. [screenshot]

So maybe I need to apply the rotation roll, yaw = -1.57079632679 to keep the magnitude of the transformation small, and then rotate again by 90 degrees in roll and yaw, right?

Thank you & best regards, Loc Hoang

cguindel commented 2 years ago

Yes, you should keep the line with the static publisher sending the rotations as it is set by default. And yes, you will obtain a LiDAR → rotated_camera transform, and then you need to apply the fixed rotated_camera → camera transform (roll = -π/2, pitch = 0, yaw = -π/2) to get the desired LiDAR → camera transform. ROS tf can manage the transform composition for you (this is what happens with the automatically generated calibrated_tf.launch), but, of course, you can also perform the operation yourself. The poor accuracy you are obtaining may be due to this; please try again with the default configuration and tell us.
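Outside of tf, the composition is just a matrix product. Something along these lines (a sketch assuming a ROS 1 Python environment; the calibration values are placeholders):

```python
import numpy as np
import tf.transformations as tft  # available in a ROS 1 environment

# Calibration output: LiDAR -> rotated_camera (placeholder values).
roll, pitch, yaw = 0.01, -0.02, 0.03
x, y, z = 0.20, -0.05, -0.10
T_lidar_rotcam = tft.euler_matrix(roll, pitch, yaw, 'sxyz')  # 4x4
T_lidar_rotcam[:3, 3] = [x, y, z]

# Fixed rotated_camera -> camera transform (roll = yaw = -pi/2, pitch = 0).
T_rotcam_cam = tft.euler_matrix(-np.pi / 2, 0.0, -np.pi / 2, 'sxyz')

# Desired LiDAR -> camera transform.
T_lidar_cam = T_lidar_rotcam @ T_rotcam_cam
# Depending on which direction your matrices encode (points mapped from
# LiDAR to camera, or the camera pose in the LiDAR frame), you may need
# np.linalg.inv(T_lidar_cam) when projecting points.
```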

HoangLoc1610 commented 2 years ago

I see. I have taken a look again. Basically, you want to rotate the camera plane to be similar to the LiDAR's. Please refer to the picture below for my LiDAR coordinate system. [screenshot]

That means I need to rotate the camera yaw by -180 deg (-π). I tried it and got what I expected for the 4 center points projected from LiDAR to camera. [screenshot]

However, when I try to project all of the LiDAR points from the pcd file, it returns something like this: [screenshot]

Do you see anything missing on my side? I don't know why it returns something like that.

HoangLoc1610 commented 2 years ago

I took a look again, and it seems that in the LiDAR pattern node you also rotate the point cloud. The four points I used were the rotated LiDAR ones (which I got from the input to registration); that's why the conversion to camera coordinates came out correct.

Could you please tell us a little bit about the rotation on the LiDAR side? [screenshot]

Following this, if we want to project the original LiDAR points into image coordinates, we need to:

HoangLoc1610 commented 2 years ago

I am sorry; I have read the source code carefully, and the rotation on the LiDAR side is only used to detect the 2D circles; the points are then rotated back to the original point cloud frame. So please ignore my previous comment.
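For anyone reading later, this is the idea as I understand it, in a rough numpy sketch (my own reconstruction, not the package's actual code; the cloud and detections are stand-ins):

```python
import numpy as np

def rotation_aligning(normal, target=np.array([0.0, 0.0, 1.0])):
    """Rotation taking `normal` onto `target` (Rodrigues formula)."""
    n = normal / np.linalg.norm(normal)
    v = np.cross(n, target)
    c = float(n @ target)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

plane_normal = np.array([1.0, 0.05, -0.02])  # e.g. from a RANSAC plane fit
R = rotation_aligning(plane_normal)

cloud = np.random.rand(500, 3)               # stand-in for the pattern cloud
flat = cloud @ R.T                           # pattern plane now ~parallel to
                                             # xy: detect the circles in (x, y)
centers_2d = np.array([[0.2, 0.3], [0.7, 0.3]])  # pretend detection output
z0 = flat[:, 2].mean()
centers_3d = np.column_stack([centers_2d, np.full(len(centers_2d), z0)])
centers_lidar = centers_3d @ R               # rotated back to the LiDAR frame
```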

But I still don't know why my projection of the pcd file comes out wrong like that. It looks like one more rotation is missing. [screenshot]

HoangLoc1610 commented 2 years ago

Just by changing to an RT matrix (generated from rpy, xyz) instead of using the tf::transform API, this is what I get: [screenshot]
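In case it helps someone else, roughly what I did (a simplified sketch with placeholder intrinsics and point cloud, not my exact code):

```python
import numpy as np
import cv2

def rt_from_rpy_xyz(roll, pitch, yaw, x, y, z):
    """4x4 transform from fixed-axis RPY (R = Rz @ Ry @ Rx) + translation."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

# Final LiDAR -> camera rpy/xyz (placeholders; already composed with the
# fixed rotated_camera -> camera rotation discussed above).
T_cam_from_lidar = rt_from_rpy_xyz(-1.57, 0.0, -1.57, 0.1, -0.2, 0.0)

K = np.array([[700.0, 0.0, 640.0],    # placeholder camera intrinsics
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # placeholder distortion coefficients

pts_lidar = np.random.rand(100, 3) * 10.0        # stand-in for the pcd cloud
pts_cam = pts_lidar @ T_cam_from_lidar[:3, :3].T + T_cam_from_lidar[:3, 3]
pts_cam = pts_cam[pts_cam[:, 2] > 0]             # keep points in front
uv, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, dist)
```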

It's much better now. I would like to close the issue. Thank you very much for your help and happy new year ^^

THARUN-V commented 2 years ago

Hi @HoangLoc1610, can you post a video of the calibration you did? I tried twice, but I can't figure out how to proceed with this package.

HoangLoc1610 commented 2 years ago

Hi @THARUN-V,

Sorry for the late reply. It's not convenient for me to create a video, but you can understand the flow like this. Assume sensor 1 is the LiDAR and sensor 2 is the mono camera.

Once you have the matrix from registration, you then need to apply two rotations, roughly as in the sketch below.
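A compact sketch of those two rotations (placeholder numbers; my own reconstruction, assuming numpy and scipy):

```python
import numpy as np
from scipy.spatial.transform import Rotation  # lowercase 'xyz' = fixed axes

def pose(roll, pitch, yaw, x, y, z):
    """4x4 transform from fixed-axis RPY plus a translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler('xyz', [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

T1 = pose(0.01, -0.02, 0.03, 0.2, -0.05, -0.1)   # registration output
T2 = pose(-np.pi / 2, 0.0, -np.pi / 2, 0, 0, 0)  # fixed camera rotation
T_lidar_cam = T1 @ T2                            # rotation 1, then rotation 2

p_lidar = np.array([5.0, 1.0, 0.5, 1.0])         # a LiDAR point, homogeneous
p_cam = np.linalg.inv(T_lidar_cam) @ p_lidar     # same point in camera frame
# Whether the inverse is needed depends on the direction your matrices
# encode; see the discussion earlier in this thread.
```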

cguindel commented 2 years ago

Thank you, @HoangLoc1610, and sorry for the late reply. I'm glad you could get a satisfactory result with our code. I'm closing this issue now.