Closed shangzhouye closed 4 years ago
Hi, thank you for using this package and for the detailed explanation. I will try my best to help you.
Try placing the target not exactly at 45 degrees. The current configuration is confusing for the vertex-corner association. I should have mentioned this in the README; I will update it in the next version.
More bags would definitely make the results better. However, based on what I am seeing now, I think an incorrect vertex-corner association caused by the target placement is the reason the result is wrong.
It is indeed the Euler angles from the LiDAR frame to the camera frame; please follow the "XYZ" convention. I will add this to the next update as well. Based on your image, the initial guess should be [90 90 0].
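As a quick sanity check on an "XYZ" Euler triple, you can build the rotation matrix and confirm it is orthonormal before feeding it in as an initial guess. The package itself is MATLAB; this is a minimal Python sketch, and it assumes "XYZ" means composing `R = Rx(rx) * Ry(ry) * Rz(rz)` — verify the exact convention against the package before relying on it.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_xyz(rx_deg, ry_deg, rz_deg):
    """Rotation from an 'XYZ' Euler triple in degrees.

    Assumed here to mean R = Rx * Ry * Rz; check the package's own
    convention before using this for an initial guess.
    """
    rx, ry, rz = (math.radians(v) for v in (rx_deg, ry_deg, rz_deg))
    return matmul(rot_x(rx), matmul(rot_y(ry), rot_z(rz)))

R = euler_xyz(90, 90, 0)  # the suggested initial guess from this thread
```

Any valid triple must satisfy `R * R^T = I` with determinant +1; if your initial guess fails that check, the angles were assembled in the wrong order or units.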
Please let me know if this works out for you!
Hi,
Thank you for the quick reply!
Hi Bruce,
Thank you! I realized it is because my LiDAR z-axis is pointing downward (x pointing forward and y pointing to the right). I will need to `sortrows`
in ascending order here and here.
I also added `if corners(2,2) > corners(2,3), corners = corners(:,[1,3,2,4]); end`
to ensure the left corner (with a smaller y value) is placed in the second column.
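For readers not fluent in MATLAB indexing, the swap above can be sketched as follows (Python here; `corners` is a list of four `[x, y, z]` points mirroring the columns of the MATLAB 3x4 `corners` matrix, so `corners(2,2)` and `corners(2,3)` are the y components of the second and third corners):

```python
def order_left_right(corners):
    """Ensure the corner with the smaller y value sits in slot 2 of 4.

    Mirrors the MATLAB one-liner from this thread:
        if corners(2,2) > corners(2,3), corners = corners(:,[1,3,2,4]); end
    Only the middle two corners may be swapped; corners 1 and 4 stay put.
    """
    c = [list(p) for p in corners]
    if c[1][1] > c[2][1]:        # corners(2,2) > corners(2,3), 0-indexed here
        c[1], c[2] = c[2], c[1]  # corners(:,[1,3,2,4])
    return c
```

Note this assumes the "left" corner is the one with the smaller y value in your frame; if your y-axis points the other way, the comparison flips.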
I'm able to get a reasonable overlap between image and lidar:
However, the output does not make sense to me. The result (H_LC) has an RPY of [-87.7767, -0.0000, -92.3559] (with T = [0.1722, -0.2121, -0.0052]), while the LiDAR-to-camera Euler angles should be around [90 90 0] as you mentioned.
The initial guess of the camera and lidar relative position is shown here (the one below is the lidar frame, RGB represents XYZ):
I'm wondering
Thank you.
Great!
The z-axis pointing downward should be fine.
The output is different because you are converting LiDAR vertices represented in the LiDAR frame into LiDAR vertices represented in the camera frame; therefore, the output is not what you expected. Also, please note that Euler angles are not unique!
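The non-uniqueness is easy to demonstrate numerically, especially near a 90-degree middle angle (gimbal lock), which is exactly the regime of the [90 90 0] guess in this thread. A minimal Python sketch, using one common composition (`R = Rz * Ry * Rx`; the same degeneracy appears in other conventions): two visibly different triples produce the identical rotation matrix.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyx(roll_deg, pitch_deg, yaw_deg):
    """R = Rz(yaw) * Ry(pitch) * Rx(roll), angles in degrees."""
    r, p, y = (math.radians(v) for v in (roll_deg, pitch_deg, yaw_deg))
    return matmul(rot_z(y), matmul(rot_y(p), rot_x(r)))

# At pitch = 90 deg the matrix depends only on (roll - yaw),
# so these two different triples encode the exact same rotation:
R1 = euler_zyx(90, 90, 0)
R2 = euler_zyx(45, 90, -45)
```

This is why comparing raw RPY numbers between two tools can look "wrong" even when the underlying rotations agree; compare rotation matrices (or apply both to a test vector) instead.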
What's the image resolution of your camera?
Your LiDAR vertices are not accurate enough. You might want to collect more datasets.
Hi,
I am so sorry!! I accidentally deleted your comments. I meant to delete mine...
This is what you wrote:
"" Thank you!
""
============================ Here is my answer to your comments:
Hi,
Please check this function to project back to the image plane.
An RMSE of 13.665 is still a bit large given your resolution. Please collect more datasets and validation datasets so that you can verify your calibration results.
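For context, reprojection RMSE here is the root-mean-square pixel distance between the detected image corners and the 3D vertices projected through the pinhole model. A minimal sketch (Python; the intrinsics `fx, fy, cx, cy` below are illustrative placeholders, not values from this issue):

```python
import math

def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame point [X, Y, Z] to pixels."""
    x, y, z = point_cam
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_rmse(points_cam, pixels, fx, fy, cx, cy):
    """RMS pixel distance between projected points and detected pixels."""
    err2 = 0.0
    for p, (u, v) in zip(points_cam, pixels):
        pu, pv = project(p, fx, fy, cx, cy)
        err2 += (pu - u) ** 2 + (pv - v) ** 2
    return math.sqrt(err2 / len(points_cam))
```

Whether a given RMSE is "large" scales with resolution: a 13-pixel error means something very different on a 640x480 image than on a 4K one, which is why the camera resolution was asked about above.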
Hi
If there are no further questions, I will close this issue by the end of today. If you encounter other related issues, please feel free to reopen it. I will try my best to help you!
Hi!
Thank you for the great calibration package. I collected a 50-second rosbag at a 5 Hz frame rate for both the camera and the LiDAR, and there are two LiDARTags in the scene. This is my calibration result, with an SNR_RMSE larger than 600. (Because the LiDAR I am using does not have rings, I have commented out all the parts related to the baseline and NSNR.)
I tried the IoU calibration method. It seems to match the left tag in the image with the right tag in the point cloud.
I also tried translating the point cloud with an initial guess to align the origin of the camera frame with the LiDAR frame (since the initial guess in justCalibrate.m only has the rotation part), and I trained twice on the same bag. This is the result I got, with an SNR_RMSE of 579.52.
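Pre-applying a full initial guess (rotation plus translation) to the cloud amounts to computing `p_cam = R * p_lidar + t` for every point. A hedged sketch of that step (Python; `R` and `t` are whatever initial guess you supply, not values from justCalibrate.m):

```python
def transform_points(points, R, t):
    """Apply p_out = R @ p + t to each [x, y, z] point in `points`.

    R is a 3x3 rotation (list of rows), t a 3-vector; this is the usual
    rigid-body transform used to express LiDAR points in the camera frame.
    """
    out = []
    for x, y, z in points:
        out.append([
            R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0],
            R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1],
            R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2],
        ])
    return out
```

One caveat with this approach: if you bake a translation into the cloud before optimizing, remember to compose it back into the final H_LC when interpreting the result, or the reported translation will be off by your initial offset.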
It seems close, but based on the RPY I am getting, the z-axis of the camera is pointing backward (when it should be pointing forward).
These are the results of the vertices in the image: