Hi Author,

Thank you very much for your contribution. I am trying to use your software for automatic calibration between a mono camera and a 16-channel LiDAR. I got the output roll, pitch, yaw and x, y, z; based on these, I built the extrinsic RT matrix and tried to project all the LiDAR points back onto the image, but the projection returned a very bad result.

I have been looking deeply into your code and have some questions; could you please help?

I would really appreciate your support. Thank you & best regards, Loc Hoang
Hi, @HoangLoc1610.
I'm not sure if I understand all your points, but I will try to give you an answer to them:
mono_pattern.launch is intended to provide an intermediate transform during the calibration procedure to account for the different conventions of the LiDAR and camera coordinate systems (z-axis pointing upwards in the LiDAR vs. z-axis pointing forward in the camera), thus reducing the magnitudes of the estimated transform. The calibration results therefore do refer to this intermediate frame. Note that this transform is already included in the resulting calibrated_tf.launch files to make the process transparent to the user; you should check how it is included there if you use your own TF broadcast implementations. Please refer to #35 for more information about this.
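For illustration, here is a minimal sketch (using the tf C++ API; this is not code from the package) that prints the fixed rotation applied by the static publisher in mono_pattern.launch, mapping camera-frame coordinates (z forward) into the rotated, LiDAR-style frame (z up):

```cpp
#include <cmath>
#include <cstdio>
#include <tf/transform_datatypes.h>

int main() {
  // Same angles as the static_transform_publisher line in mono_pattern.launch:
  // the 6-argument form is "x y z yaw pitch roll", so yaw = -pi/2, pitch = 0,
  // roll = -pi/2.
  tf::Quaternion q;
  q.setRPY(-M_PI_2, 0.0, -M_PI_2);  // setRPY takes (roll, pitch, yaw)
  tf::Matrix3x3 R(q);
  for (int i = 0; i < 3; ++i)
    std::printf("% .1f % .1f % .1f\n", R[i].x(), R[i].y(), R[i].z());
  // Prints the rotation matrix:
  //  0.0  0.0  1.0   (rotated x = camera z: forward)
  // -1.0  0.0  0.0   (rotated y = -camera x: left)
  //  0.0 -1.0  0.0   (rotated z = -camera y: up)
  return 0;
}
```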
Hi,

Thank you very much for your reply. So it's better to keep this line, right?

<node pkg="tf" type="static_transform_publisher" name="camera_rostf$(arg sensor_id)" args="0 0 0 -1.57079632679 0 -1.57079632679 rotated_monocam monocam 10"/>
But it also means that if I get the transform info (roll, pitch, yaw, x, y, z), then after converting the point cloud to camera coordinates we need to rotate one more time, with roll = yaw = 1.57079632679 and pitch = 0, right? And then we can apply projectPoints to get the 2D x, y in pixels, right?
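Just to make the question concrete, this is roughly the projection step I mean (a minimal sketch assuming OpenCV's cv::projectPoints; the helper name is mine):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Project LiDAR points that are ALREADY expressed in the camera optical
// frame (i.e., after applying the calibrated extrinsic transform plus the
// fixed rotation discussed above).
std::vector<cv::Point2f> projectToImage(const std::vector<cv::Point3f>& pts_cam,
                                        const cv::Mat& K,       // 3x3 intrinsics
                                        const cv::Mat& dist) {  // distortion coeffs
  std::vector<cv::Point2f> pixels;
  // rvec and tvec are zero because the extrinsics were applied beforehand.
  cv::projectPoints(pts_cam, cv::Vec3d(0, 0, 0), cv::Vec3d(0, 0, 0),
                    K, dist, pixels);
  return pixels;
}
```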
Currently I tried to set <node pkg="tf" type="static_transform_publisher" name="camera_rostf$(arg sensor_id)" args="0 0 0 0 0 0 rotated_monocam monocam 10"/>, but the result was really bad. The red dots should sit at the centers of the 4 holes, but after converting I got 4 points of which two are very close to each other, which is why you only see 3 red dots.
So maybe I need to apply the rotation roll = yaw = -1.57079632679 to keep the magnitudes of the estimated transform small, and then rotate by 90 deg in roll and yaw again, right?
Thank you & best regards, Loc Hoang
Yes, you should keep the line with the static publisher sending the rotations as it is set by default. And yes, you will obtain a LiDAR→rotated_camera transform, and then you need to apply the fixed rotated_camera→camera transform (roll = -π/2, pitch = 0, yaw = -π/2) to get the desired LiDAR→camera transform. ROS tf can manage the transform composition for you (this is what happens with the automatically generated calibrated_tf.launch), but, of course, you can also perform the operation yourself. The poor accuracy you are obtaining may be due to this; please try again with the default configuration and let us know.
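If you do the composition by hand, it looks roughly like this (a sketch using the tf C++ API; the function and variable names are ours for illustration, not identifiers from the package):

```cpp
#include <cmath>
#include <tf/transform_datatypes.h>

// Compose the calibrated LiDAR -> rotated_camera transform with the fixed
// rotated_camera -> camera transform to obtain LiDAR -> camera.
tf::Transform lidarToCamera(const tf::Transform& lidar_to_rotated_cam) {
  tf::Transform rotated_to_cam;
  tf::Quaternion q;
  q.setRPY(-M_PI_2, 0.0, -M_PI_2);  // roll = -pi/2, pitch = 0, yaw = -pi/2
  rotated_to_cam.setRotation(q);
  rotated_to_cam.setOrigin(tf::Vector3(0.0, 0.0, 0.0));
  // tf composes transforms along the frame chain with operator*.
  return lidar_to_rotated_cam * rotated_to_cam;
}
```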
I see. I have taken another look. Basically, you want to rotate the camera frame so that it matches the LiDAR one. Please refer to the picture below for my LiDAR coordinate system.

It means I need to rotate the camera yaw by -180 deg (-π). I tried it and got what I expected for the 4 center points transformed from LiDAR to camera.
However, when I try to project all of the LiDAR points from the pcd file, it returns something like this:

Do you see anything missing on my side? I don't know why it returns something like that.
I took another look; it seems that in the LiDAR pattern node you also rotate the point cloud. The four points I used were the rotated LiDAR ones (which I got from the input to the registration step). That's why the conversion to camera coordinates came out correct.

Could you please tell us a little bit about the rotation on the LiDAR side?

Following this, if we want to project the original LiDAR points to image coordinates, we need to:
I am sorry; I have read the source code carefully, and the rotation on the LiDAR side is only used to detect the 2D circles, after which the point cloud is rotated back to the original frame. So please ignore my previous comment.

But I still don't know why my projection of the pcd file comes out wrong like that. It looks like one more rotation is missing.
Just by changing to an RT matrix (generated from rpy, xyz) instead of the tf::Transform API, this is what I have:
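For reference, the RT construction meant here would look roughly like this (an Eigen-based sketch; assumed, since the exact code was not posted in the thread):

```cpp
#include <Eigen/Geometry>

// Build a 4x4 extrinsic RT matrix from the calibration output (roll, pitch,
// yaw, x, y, z), using the same ZYX (yaw * pitch * roll) convention as
// tf::Quaternion::setRPY.
Eigen::Matrix4d rtFromRpyXyz(double roll, double pitch, double yaw,
                             double x, double y, double z) {
  Eigen::Matrix3d R =
      (Eigen::AngleAxisd(yaw,   Eigen::Vector3d::UnitZ()) *
       Eigen::AngleAxisd(pitch, Eigen::Vector3d::UnitY()) *
       Eigen::AngleAxisd(roll,  Eigen::Vector3d::UnitX())).toRotationMatrix();
  Eigen::Matrix4d RT = Eigen::Matrix4d::Identity();
  RT.block<3, 3>(0, 0) = R;
  RT.block<3, 1>(0, 3) = Eigen::Vector3d(x, y, z);
  return RT;  // applied as p_out = RT * p_in on homogeneous points
}
```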
It's much better now. I would like to close the issue. Thank you very much for your help and happy new year ^^
Hi @HoangLoc1610, can you post a video of the calibration you did? I tried twice, but I'm not getting how to proceed with this package.
Hi @THARUN-V,
Sorry for the late reply. It's not convenient for me to create a video, but you can understand the flow like this, assuming sensor 1 is the LiDAR and sensor 2 is the mono camera.

I need to rotate the yaw by 180 deg to fit my Hesai LiDAR.

Once you have the matrix from the registration step, you then need to apply the two rotations, roughly as in the sketch below.
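(A hedged sketch with the tf API; the extra yaw = π is specific to my Hesai mounting, and its placement in the chain is an assumption you should verify against your own frames.)

```cpp
#include <cmath>
#include <tf/transform_datatypes.h>

// Helper: a pure-rotation transform from roll/pitch/yaw.
static tf::Transform rotationRPY(double roll, double pitch, double yaw) {
  tf::Quaternion q;
  q.setRPY(roll, pitch, yaw);
  return tf::Transform(q, tf::Vector3(0.0, 0.0, 0.0));
}

// Chain: (optional LiDAR-side yaw fix) * (calibrated transform from the
// registration step) * (fixed rotated_camera -> camera rotation).
tf::Transform finalLidarToCamera(const tf::Transform& from_registration) {
  tf::Transform yaw_fix = rotationRPY(0.0, 0.0, M_PI);  // Hesai-specific
  tf::Transform rotated_to_cam = rotationRPY(-M_PI_2, 0.0, -M_PI_2);
  return yaw_fix * from_registration * rotated_to_cam;
}
```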
Thank you, @HoangLoc1610, and sorry for the late reply. I'm glad you could get a satisfactory result with our code. I'm closing this issue now.