JonesCVBS / HandEyeCalibration-using-OpenCV

Simple OpenCV implementation for hand-eye calibration

Input arguments for cv2.calibrateHandEye() #1

Open · ThurSN opened this issue 9 months ago

ThurSN commented 9 months ago

Hi, first of all, thanks for your code and example. I just want to ask about the input arguments you pass to cv2.calibrateHandEye(). In HandEyeCalibration_class.py, lines 68-74, it reads: self.R_cam2gripper, self.t_cam2gripper = cv2.calibrateHandEye(self.R_cam2target, self.T_cam2target, self.R_vecEE2Base, self.tEE2Base, method=i)

However, the OpenCV reference (https://docs.opencv.org/4.5.4/d9/d0c/group__calib3d.html#gaebfc1c9f7434196a374c382abf43439b) defines it as: cv.calibrateHandEye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam[, R_cam2gripper[, t_cam2gripper[, method]]]) -> R_cam2gripper, t_cam2gripper
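Side by side, the first block quotes the repo's call and the second just restates the documented signature, so the mismatch is easy to see:

```python
# As written in HandEyeCalibration_class.py (lines 68-74):
self.R_cam2gripper, self.t_cam2gripper = cv2.calibrateHandEye(
    self.R_cam2target, self.T_cam2target,  # where R_gripper2base, t_gripper2base are expected
    self.R_vecEE2Base, self.tEE2Base,      # where R_target2cam, t_target2cam are expected
    method=i,
)

# As the OpenCV 4.5.4 docs define it:
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base,
    R_target2cam, t_target2cam,
    method=method,
)
```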

So I wonder, did you implement it wrongly? Yet when I checked your results, the R_cam2gripper and t_cam2gripper from the four methods are almost identical to each other, and when I passed the input parameters in the order given in the OpenCV reference, that was not the case.

Thank you!

Regards, Arthur

JonesCVBS commented 9 months ago

Hey Arthur!

I did this a while ago and my memory is a bit fuzzy, so excuse me if I get something wrong.

Yes, I noticed the same thing. I double-checked the result against calibration routines in other programs, and the "wrong" argument order in the OpenCV function gave the same answer; I even reproduced it in MATLAB. I also verified the result with grasping tests on the real robot and found the calibration obtained from the "wrong" order to be very accurate.

There are three hypotheses for this:

  1. OpenCV has a different standard for their variables. -> Very unlikely, since they explain the variables very clearly.
  2. The input order for the function in the documentation is wrong. -> Very unlikely; it's used so often that it would have been fixed by now, right?
  3. I did something wrong when calculating the transformations. -> I checked this so many times I lost count, showed it to different people, and tried it in MATLAB. I'm pretty sure this isn't the case.

I'm not really sure which one it is. I never contacted anyone at OpenCV, I didn't find much help on the issue online, and since the results were very accurate this way, I used them for my robot.
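One way to probe the three hypotheses, not something from the repo itself, is a synthetic sanity check where the ground-truth cam2gripper transform is known: generate a consistent set of poses, then call calibrateHandEye() with both argument orders and compare against the ground truth. A minimal sketch, with an illustrative random-pose generator:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def rand_pose():
    # Random rigid transform as a 4x4 homogeneous matrix
    R, _ = cv2.Rodrigues(rng.uniform(-1.0, 1.0, (3, 1)))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rng.uniform(-0.5, 0.5, 3)
    return T

def split(T):
    return T[:3, :3], T[:3, 3].reshape(3, 1)

X = rand_pose()      # ground-truth cam2gripper, the quantity calibrateHandEye returns
T_t2b = rand_pose()  # fixed (unknown) target pose in the base frame (target2base)

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    T_g2b_i = rand_pose()  # random gripper pose in the base frame
    # Closed chain: target2base = gripper2base @ cam2gripper @ target2cam
    T_t2c_i = np.linalg.inv(T_g2b_i @ X) @ T_t2b
    R, t = split(T_g2b_i); R_g2b.append(R); t_g2b.append(t)
    R, t = split(T_t2c_i); R_t2c.append(R); t_t2c.append(t)

# Documented order: gripper2base first, target2cam second
R_doc, t_doc = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)

# The repo's order: cam2target (inverse of target2cam) first, gripper2base second
R_c2t = [R.T for R in R_t2c]
t_c2t = [-R.T @ t for R, t in zip(R_t2c, t_t2c)]
R_alt, t_alt = cv2.calibrateHandEye(R_c2t, t_c2t, R_g2b, t_g2b)

print("ground-truth t_cam2gripper:", split(X)[1].ravel())
print("documented argument order: ", t_doc.ravel())
print("swapped + inverted order:  ", t_alt.ravel())
```

Whichever call lands on the ground truth tells you which convention the function actually implements, which would settle hypotheses 1 and 2 without touching the real robot.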

If you find an explanation let me know because I'm also curious.

ThurSN commented 9 months ago

Hi JonesCVBS,

Thanks a lot for your reply! I also wonder why such an error would be in OpenCV. I think I'll contact the OpenCV team about this and let you know their response.

Btw, do you still remember what made you pass the inputs to this function in the wrong order (and pass the wrong inputs altogether, in the case of R_cam2target and t_cam2target) in the first place?

Cheers, Arthur

ThurSN commented 9 months ago

Hi JonesCVBS,

Another comment on your code: you wrote that the numbers in the filenames of the transformation matrices and the images correspond to each other, like color_image001.png <-> T_base2EE_001.npz, and so on. However, the images start at color_image001.png while the transformation matrices start at T_base2EE_000.npz, so I assume these two don't correspond to each other. Your code also loads and uses every transformation matrix in the directory (25 of them), while there are 31 images, and findChessboardCorners() can't find corners in 6 of them, so the number of images actually used is also 25; it just happens to equal the number of transformation matrices.

So I assume that among these 25 pairs, not every (image, transformation matrix) pair is a true correspondence (possibly none of them are). And yet, when your code runs, it produces almost the same result for all four methods (as I mentioned previously). How can this be? I would expect that each image and transformation matrix in the input must correspond to each other.

In fact, after I ran calibrateHandEye() with the correspondence between the images and the base-to-end-effector transformation matrices properly established in your example (I added another image, color_image000.png, with the calibration board cropped out so that no corners are detected), I found that only methods 1 (Park) and 2 (Horaud) give similar results.
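For anyone else reading along, here is how the numeric method indices line up with OpenCV's constants (a quick check; I'm assuming the repo's method=i loops over this enum):

```python
import cv2

# Method indices as used with method=i, mapped to OpenCV's enum names:
print(cv2.CALIB_HAND_EYE_TSAI)        # 0
print(cv2.CALIB_HAND_EYE_PARK)        # 1
print(cv2.CALIB_HAND_EYE_HORAUD)      # 2
print(cv2.CALIB_HAND_EYE_ANDREFF)     # 3
print(cv2.CALIB_HAND_EYE_DANIILIDIS)  # 4
```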

Cheers, Arthur

JonesCVBS commented 9 months ago

Hey Arthur,

Appreciate you diving deep into the code and pointing things out. Let me clarify the situation:

  1. Purpose of the Code: Essentially, the code should associate images with their transforms only if the corners are detected in the images. If we have an image without detected corners, its corresponding transform isn’t used.

  2. About the Numbering: Imagine we've got 31 images and 31 transforms. If corners aren’t detected in images 25-31, we should really be working with only the first 24 transforms. The goal is that every transform used has an associated image.

  3. My Oversight: I'll admit, I initially did the matching manually (hence the 31 joint positions). The 25 transforms are, in fact, synced with the images in sequence. So T_base2EE_000.npz matches with color_image001.png. The exact numbers aren’t critical; it's about reading files in sequence.

I've tweaked the code to automate this matching process. Now, even starting from the 31 original pairs, corner detection leaves us with results for 25 matched pairs.
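For future readers, the automated matching might look roughly like this; the file layout, chessboard size, and .npz key below are my assumptions, not the repo's exact code:

```python
import glob
import cv2
import numpy as np

# Hypothetical layout and pattern size; adapt to the actual data.
image_files = sorted(glob.glob("color_image*.png"))
pose_files = sorted(glob.glob("T_base2EE_*.npz"))
pattern_size = (9, 6)  # inner chessboard corners (assumption)

kept_corners, kept_poses = [], []
# Pair files by sequence position, not by the number in the name,
# so color_image001.png lines up with T_base2EE_000.npz.
for img_path, pose_path in zip(image_files, pose_files):
    gray = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue  # skip the image AND its transform together
    kept_corners.append(corners)
    kept_poses.append(np.load(pose_path)["arr_0"])  # key name is an assumption

print(f"{len(kept_corners)} matched (image, transform) pairs")
```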

Thanks again for highlighting this – constructive feedback helps iron out the kinks.

Best, João