moveit / moveit_calibration

Hand-eye calibration tools for robot arms.
BSD 3-Clause "New" or "Revised" License

Camera pose estimation is wrong while doing eye-to-hand calibration. #134

Open Bala1411 opened 1 year ago

Bala1411 commented 1 year ago

Hi everyone, I am performing eye-to-hand calibration using a 6-DoF cobot and a Logitech C270 as my USB camera. In the Context tab, I selected the sensor frame as usb_cam, the target frame as handeye_target, the end-effector frame as link6, and the base frame as base_0 from the dropdown menus. I also created the marker in the Target tab. In the sensor configuration I selected the eye-to-hand configuration, and I set the initial camera pose guess by manually measuring the position of the USB camera with respect to base_0 in the physical setup (x = 0.01, y = 0.550, z = 0.710, rx = -1.57, ry = 2.99, rz = 0.35).

The problem is that after taking 4 samples, when I go to the 5th sample, the calibration is computed and gives the transformation matrix from base_0 to usb_cam. The resulting position and orientation of the camera are very far from my physical setup.

I have tried this multiple times, but I keep getting the wrong position and orientation for the USB camera. Can anyone tell me what I have done wrong, or suggest steps to follow? (Three screenshots attached.)

Thanks in advance.

JStech commented 1 year ago

I see three potential issues:

Bala1411 commented 1 year ago

@JStech Thank you for your valuable reply. Of the three points you mentioned, I think the first is the problem in my case, because I have already checked the other two. Could you explain, or suggest steps to follow, to solve the camera intrinsic calibration problem? What should I do to get the Z axis as accurate as X and Y? What should I do with my camera before hand-eye calibration? Thanks in advance.

Bala1411 commented 1 year ago

@JStech I have resolved the issue by setting the camera's intrinsic parameters in the camera_info.yaml file. Everything works fine now. I have another question, regarding the samples: after taking the 5th sample I got a matrix from base to usb_cam, and I took 15 samples in total. From the 5th sample up to the 15th, the matrix keeps changing with each new sample. My application is pick and place. Which transformation matrix should I use: the one after the 5th sample or the one after the 15th?
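
(For anyone hitting the same intrinsics problem, below is a minimal sketch of computing the camera matrix and distortion coefficients with OpenCV's checkerboard calibration; these are the values that go into camera_info.yaml. The image folder and board geometry are assumed values for illustration, not taken from this thread.)

```python
# Sketch: compute camera intrinsics with OpenCV's checkerboard calibration.
# The folder name and board geometry are assumed values for illustration.
import glob
import cv2
import numpy as np

BOARD = (9, 6)    # inner-corner count of the checkerboard (assumed)
SQUARE = 0.025    # square size in meters (assumed)

# Checkerboard corner coordinates in the board's own frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_images/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]                    # (width, height)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K (3x3 camera matrix) and dist (distortion coefficients) are exactly
# the values that belong in camera_info.yaml.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("RMS reprojection error (px):", rms)     # well under 1 px is a good sign
print("camera matrix:\n", K)
print("distortion:", dist.ravel())
```

An inaccurate focal length typically shows up as a depth (Z) error in the estimated target pose while X and Y still look plausible, which matches the symptom described above.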

Mani-Radhakrishanan commented 10 months ago

How much accuracy are you getting with this procedure? My robot has three DoF, and I am not getting good accuracy.

JStech commented 10 months ago

@foreverbala use the calibration obtained after the 15th sample. This uses data from all 15 samples, so it will (probably) be the best.

@Mani-Radhakrishanan unfortunately, three DoF might not be sufficient to solve a calibration. Which three degrees of freedom does your robot have? If I recall correctly, you need to include rotations around two non-parallel axes.
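
(To check that condition on a set of recorded samples, here is a minimal sketch: take the relative rotation between consecutive end-effector poses, extract its axis, and verify that the stacked axes span at least two directions. `Rs` below is a hypothetical stand-in for your samples, not data from this thread.)

```python
# Sketch: check that recorded end-effector rotations span two
# non-parallel axes. `Rs` is a hypothetical list of 3x3 rotation
# matrices of the end effector at each sample.
import numpy as np
import cv2

def rotation_axis_rank(Rs, min_angle=1e-3):
    axes = []
    for R_prev, R_next in zip(Rs[:-1], Rs[1:]):
        rvec, _ = cv2.Rodrigues(R_prev.T @ R_next)  # axis * angle of relative rotation
        angle = np.linalg.norm(rvec)
        if angle > min_angle:                       # ignore near-pure translations
            axes.append((rvec / angle).ravel())
    return np.linalg.matrix_rank(np.array(axes), tol=1e-2)

# Example: rotations about two different axes -> rank 2 (solvable).
Rs = [cv2.Rodrigues(np.array(v))[0]
      for v in ([0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.3, 0.2, 0.0])]
print("rotation-axis rank:", rotation_axis_rank(Rs), "(need >= 2)")
```

If the rank comes out as 1, all sampled rotations share one axis and no amount of extra samples will make the calibration unique.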

Mani-Radhakrishanan commented 10 months ago

@JStech Thanks for the reply. By default I am taking 15 samples. The robot has two rotations (non-parallel revolute joints) and one prismatic joint; basically it is an R, Theta, Phi arm. The optimization produces a result in both eye-in-hand and eye-to-hand configurations: it calibrates, but the accuracy is not good enough.

1. What is the best-case accuracy people have achieved so far using MoveIt?
2. Is it possible to get 1 mm to 3 mm accuracy for a robot?
3. How can the accuracy be validated and improved?

Is there a demonstration link you can provide that shows how much accuracy is achievable?
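
(On validation, one common numeric check, sketched here under assumed variable names rather than as any MoveIt feature: in an eye-to-hand setup the target is rigidly mounted on the end effector, so the end-effector-to-target transform recovered from each sample should be constant, and its spread across samples estimates the calibration error.)

```python
# Sketch: gauge eye-to-hand calibration accuracy via the spread of the
# recovered end-effector -> target transform, which should be constant.
# All 4x4 homogeneous matrices are hypothetical inputs:
#   T_base_ee[i]    robot forward kinematics at sample i
#   T_cam_target[i] target pose detected by the camera at sample i
#   T_base_cam      the eye-to-hand calibration result being evaluated
import numpy as np

def translation_spread_mm(T_base_ee, T_cam_target, T_base_cam):
    T_ee_target = [np.linalg.inv(Tbe) @ T_base_cam @ Tct
                   for Tbe, Tct in zip(T_base_ee, T_cam_target)]
    trans = np.array([T[:3, 3] for T in T_ee_target])
    # Worst deviation from the mean, in millimeters.
    return 1000.0 * np.linalg.norm(trans - trans.mean(axis=0), axis=1).max()

# Usage with your recorded samples (hypothetical):
# print(f"{translation_spread_mm(T_base_ee, T_cam_target, T_base_cam):.1f} mm")
```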

Mani-Radhakrishanan commented 9 months ago

Also, I performed eye-in-hand calibration with the camera mounted on the moving Theta joint, i.e. the motion of the remaining joints does not affect the camera position. In this case the optimized values are very large, on the order of meters.

What is the minimum number of constraints (joint movements) required for eye-in-hand vs. eye-to-hand calibration?

JStech commented 9 months ago

Only two DoF are necessary, but they must be non-parallel rotations. A picture of your robot would help, but if "R" is prismatic, and "Theta" is revolute, and then the camera is mounted to that joint (so that "Phi" doesn't move the camera), the solver won't find a unique calibration.
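
(The same AX = XB problem can be reproduced end to end with OpenCV's `cv2.calibrateHandEye`, which makes the degeneracy easy to see: with synthetic motions whose rotations span two axes the solver recovers the ground truth, while motions about a single axis leave the solution non-unique and the numbers come out essentially arbitrary, consistent with the very large values reported above. A runnable sketch with made-up poses:)

```python
# Sketch: reproduce the AX = XB hand-eye solve with OpenCV on synthetic
# eye-in-hand data. All names and numbers are made up for illustration.
import numpy as np
import cv2

rng = np.random.default_rng(0)

def rt(rvec, t):
    """4x4 homogeneous transform from a rotation vector and a translation."""
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(np.asarray(rvec, float))[0]
    T[:3, 3] = t
    return T

X_true = rt([0.1, -0.2, 0.3], [0.05, 0.0, 0.10])      # camera pose in gripper frame (ground truth)
T_base_target = rt([0.0, 0.0, 0.5], [0.6, 0.0, 0.2])  # fixed target in the base frame

Rg, tg, Rt, tt = [], [], [], []
for _ in range(10):
    # Gripper poses with rotations about varying axes keep the problem well
    # posed; restricting rotation to one axis would make it degenerate.
    T_bg = rt(rng.uniform(-0.5, 0.5, 3), rng.uniform(-0.2, 0.2, 3))
    T_ct = np.linalg.inv(X_true) @ np.linalg.inv(T_bg) @ T_base_target
    Rg.append(T_bg[:3, :3]); tg.append(T_bg[:3, 3])
    Rt.append(T_ct[:3, :3]); tt.append(T_ct[:3, 3])

R_est, t_est = cv2.calibrateHandEye(Rg, tg, Rt, tt,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print("rotation recovered:", np.allclose(R_est, X_true[:3, :3], atol=1e-6))
print("translation recovered:", np.allclose(t_est.ravel(), X_true[:3, 3], atol=1e-6))
```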