heethesh / lidar_camera_calibration

Light-weight camera LiDAR calibration package for ROS using OpenCV and PCL (PnP + LM optimization)
BSD 3-Clause "New" or "Revised" License

Is It Possible / How to Use MATLAB Scripts to Show Transformations of Aligned Points Using Graphs? #22

Open · Bsting118 opened this issue 3 years ago

Bsting118 commented 3 years ago

I was wondering if there is a way to use the X Y Z translations and the RPY angles from (maybe) extrinsics.npz to plot a graph of aligned points with the average transformations. I was thinking this could be done in MATLAB, but I am not sure how exactly. I want to do this so I can verify the results and show the accuracy of the calibration.

I am also curious about the rotation matrix. Are the numbers in the rotation matrix the average rotations or the final rotations? If the rotation matrix consists of the average rotations, how can I get the final rotations? (I believe I need the initial LiDAR-to-camera rotation multiplied by the average rotation.) Also, are the X Y Z translations in T.npy the average translations?

Is there also a way to get the RMSE (root mean square error) between the 3D points seen by the camera and the LiDAR after applying the transformation?

Bsting118 commented 3 years ago

Additionally, is there a way to display or calculate the possible calibration errors, such as the RMSE?

heethesh commented 3 years ago

You can compute the RMSE at this point. Here's a rough snippet to get started (untested):

import cv2
import numpy as np

# Reproject the 3D points into the image using the estimated extrinsics
points2D_reproj = cv2.projectPoints(points3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs)[0].squeeze(1)
assert points2D_reproj.shape == points2D.shape
# RMSE of the per-point reprojection residuals, in pixels
error = points2D_reproj - points2D
rmse = np.sqrt(np.mean(error[:, 0] ** 2 + error[:, 1] ** 2))

I am not sure what you mean by "average transformations" or "average rotations". The rotation matrix is the matrix corresponding to the Euler angles (they represent the same rotation in different forms). The translation vector and Euler angles are the final transform and the only transform. Check the README to visualize the projection and update this line with the X Y Z Y P R values.
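
If it helps, here is a minimal sketch of turning the saved values into that X Y Z Y P R string (the file paths and the assumption that euler.npy stores roll, pitch, yaw are mine; check the script that writes these files):

import numpy as np

T = np.load('T.npy')          # translation vector, assumed [x, y, z] in meters
euler = np.load('euler.npy')  # Euler angles, assumed stored as (roll, pitch, yaw)

x, y, z = T.ravel()
roll, pitch, yaw = euler.ravel()
# The launch file line expects the ordering X Y Z Y(aw) P(itch) R(oll)
print('%.6f %.6f %.6f %.6f %.6f %.6f' % (x, y, z, yaw, pitch, roll))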

Bsting118 commented 3 years ago

Ah, OK. About the average transformations and rotations, I was just wondering whether they were final or averaged; clearly they are final. By the way, when updating the line with the X Y Z Y P R values in the display launch script, do we also have to modify or add something for the rotation matrix (R.npy)? For the X Y Z Y P R, I only used T.npy for the X Y Z and euler.npy for the Y P R (RPY reversed). Please confirm whether I have to use R.npy or the rotation matrix at all; it seems I only need the T.npy and euler.npy values.

heethesh commented 3 years ago

Yes, you are right, you only need the Euler angles. The rotation matrix is a more consistent way of representing rotations and does not have the ambiguities associated with Euler angles; it is only saved for debugging.
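
If you want to convince yourself that the two files describe the same rotation, a small check along these lines should work (a sketch with assumptions: angles in radians and an 'xyz' axis convention; verify both against the calibration script):

import numpy as np
from scipy.spatial.transform import Rotation

R = np.load('R.npy')          # 3x3 rotation matrix saved for debugging
euler = np.load('euler.npy')  # assumed (roll, pitch, yaw) in radians

# Rebuild the matrix from the Euler angles and compare; the 'xyz' order
# is an assumption, not necessarily what the script uses
R_check = Rotation.from_euler('xyz', euler.ravel()).as_matrix()
print(np.allclose(R, R_check, atol=1e-6))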

Bsting118 commented 3 years ago

Which values in the lidar_camera_calibration data are the 2D RGB values/output? Are pcl_corners.npy and img_corners.npy the 2D RGB values? If not, where can I find the 2D RGB outputs from the calibration?

heethesh commented 3 years ago

The RGB values themselves are not stored anywhere; the 2D image points are stored in img_corners.npy and the corresponding 3D points in pcl_corners.npy.

Bsting118 commented 3 years ago

Is there any way to calculate the RGB values manually from the stored points?

heethesh commented 3 years ago

Just store the RGB values here along with the image coordinates.

Bsting118 commented 3 years ago

Do I need just an assignment operator and a variable, or do I need a compound operator for that segment of code?

heethesh commented 3 years ago

Maybe these OpenCV tutorials and reading up on working with NumPy arrays will help you get started. You can just use a Python list to store the RGB values accessed from the image.
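
As an illustration only (the variable names img and points2D are assumptions carried over from the earlier snippet, and img_colors.npy is a hypothetical output file), grabbing the color under each picked pixel could look like:

import numpy as np

# img: HxWx3 image as loaded by OpenCV (BGR order), points2D: Nx2 pixel coords
colors = []
for u, v in points2D.astype(int):
    b, g, r = img[v, u]  # images index as [row, col], i.e. [y, x]
    colors.append((r, g, b))
np.save('img_colors.npy', np.array(colors))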

Bsting118 commented 3 years ago

Okay, thank you for the help!

yulan0215 commented 3 years ago

You can compute the RMSE at this point. Here's a rough snippet to get started (untested): [...]

Hi, I used this code to check the RMSE, but the results did not seem acceptable; for example, the first result was 9 and the second was 27.

heethesh commented 3 years ago

Hi, I used this code to check the RMSE, but the results did not seem acceptable; for example, the first result was 9 and the second was 27.

RMSE computed over how many points? Is this with or without the LM refinement step (OpenCV > 4.1)?

yulan0215 commented 3 years ago

RMSE computed over how many points? Is this with or without the LM refinement step (OpenCV > 4.1)?

I used OpenCV 4.2. I first computed 6 pairs of point-cloud and pixel correspondences, and that gave the result of 9... Besides, the code I added was like this: [image]

Can you give me some ideas about it? Thank you very much and I am looking forward to your reply!

heethesh commented 3 years ago

A 9-pixel RMSE is reasonable for 6 correspondences; I would recommend using more than 30-40 correspondences. Note that this evaluation computes the reprojection error over the outlier points from solvePnPRansac as well, which may increase the RMSE even when the transform estimate is better. Try using only the inlier points, and also run the LM optimization only on the inliers from solvePnPRansac. The fourth return value of solvePnPRansac is the inlier mask; I will probably update the script to use only inliers for the LM refinement step. You can go ahead and try this out.
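
A rough sketch of that change (untested, with variable names carried over from the earlier snippet; solvePnPRefineLM needs OpenCV >= 4.1):

import cv2
import numpy as np

# The fourth return value of solvePnPRansac is the inlier index array
success, rvec, tvec, inliers = cv2.solvePnPRansac(
    points3D, points2D, camera_matrix, dist_coeffs)

# Keep only the inlier correspondences
idx = inliers.ravel()
points3D_in, points2D_in = points3D[idx], points2D[idx]

# Run the LM refinement on the inliers only
rvec, tvec = cv2.solvePnPRefineLM(
    points3D_in, points2D_in, camera_matrix, dist_coeffs, rvec, tvec)

# RMSE over the inliers only
reproj = cv2.projectPoints(points3D_in, rvec, tvec, camera_matrix, dist_coeffs)[0].squeeze(1)
rmse = np.sqrt(np.mean(np.sum((reproj - points2D_in) ** 2, axis=1)))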

yulan0215 commented 3 years ago

Thank you for your reply, and I am looking forward to your update. One more question, which is not related to this issue: should I install the Eigen library? In most examples I saw online, when people use functions like cv2.solvePnPRansac, they also install Eigen for the matrix computations.

heethesh commented 3 years ago

Thank you for your reply, and I am looking forward to your update.

Try this patch from the branch inliers-reprojection-error and let me know if it works; I'll merge it in.

One more question, which is not related to this issue: should I install the Eigen library? In most examples I saw online, when people use functions like cv2.solvePnPRansac, they also install Eigen for the matrix computations.

Your OpenCV Python library probably already dynamically links to the Eigen libraries on your system, without which you would not be able to use that function.

yulan0215 commented 3 years ago

Try this patch from the branch inliers-reprojection-error and let me know if it works; I'll merge it in.

Hi, the code you provided worked, but the RMSE was very high: [image] The calibration terminal output is shown above. When I selected 7 pairs of pixels and point-cloud points in one frame, it gave me the warning "Initial estimation unsuccessful, skipping refinement", and only the reprojection error before LM refinement was shown in the terminal. Thank you very much, and I am looking forward to your reply!

yulan0215 commented 3 years ago

Sorry, I have one more question: in the solvePnPRansac function, I do not know whether the UPnP or the EPnP algorithm is used in the calibration. If you know the principle, could you please tell me? Thank you very much!

heethesh commented 3 years ago

Please see the documentation for more details here. If the initial estimation is bad, it probably means you had poor or insufficient correspondences; again, see the documentation for why it might have failed. Yes, if LM refinement doesn't run, it won't compute the RMSE there. This method from OpenCV also returns the reprojection error.
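
For reference, one OpenCV call that does return per-solution reprojection errors is cv2.solvePnPGeneric (OpenCV >= 4.1); whether that is the method meant above is an assumption, and the variable names are carried over from the earlier snippet:

import cv2

# solvePnPGeneric returns the number of solutions, the candidate poses,
# and a per-solution reprojection error array
n, rvecs, tvecs, reproj_errors = cv2.solvePnPGeneric(
    points3D, points2D, camera_matrix, dist_coeffs)
print(reproj_errors)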

yulan0215 commented 3 years ago

I will check it later, thx!

yulan0215 commented 3 years ago

Hi, thank you for your suggestion; that problem is solved. However, when I ran the calibration, another error occurred: [image] I followed the code you provided yesterday, but this problem occurred.

heethesh commented 3 years ago

Can you print the error array? What is its shape (error.shape)?

yulan0215 commented 3 years ago

Can you print the error array? What is its shape (error.shape)?

[image] The error.shape and the error array are shown above, thx!

yulan0215 commented 3 years ago

Sorry, one more question: do you have any literature related to this LiDAR-camera spatial calibration? Thx!

heethesh commented 3 years ago

The error.shape and the error array are shown above, thx!

Your error array has an extra dimension; I did apply a .squeeze(1) on axis 1. It seems you were previously able to compute the RMSE without any issues. Is your code/implementation different from master now?
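
For reference (shapes assumed from the earlier snippet), cv2.projectPoints returns an (N, 1, 2) array, which is exactly what the squeeze removes:

import numpy as np

reproj = np.zeros((10, 1, 2))       # stand-in for the projectPoints output shape
print(reproj.squeeze(1).shape)      # (10, 2), matches an Nx2 points2D array
print(reproj.reshape(-1, 2).shape)  # (10, 2), an equivalent alternative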

yulan0215 commented 3 years ago

Your error array has an extra dimension; I did apply a .squeeze(1) on axis 1. It seems you were previously able to compute the RMSE without any issues. Is your code/implementation different from master now?

No, I used the code you updated; I just modified the FOV of the point cloud. I have another question: do you know how to change the size of the points when the point cloud is reprojected onto the image via display_camera_lidar_calibration.launch? The points were very small. Thx!