> 1: Take a set of images from multiple angles with a calibration grid attached to the robot end effector tool
> 2: Record the base pose for each of the images taken (What format should these be in? Is (x, y, z, rx, ry, rz) acceptable?)
I would suggest doing both of these steps using the CLI calibration tool in rct_ros_tools. Basically it exposes a service for capturing relevant calibration data from the ROS system (i.e. the TF transform from a specified base frame to a specified tool frame, and 2D images) and a service for saving that data to a file structure. In your case you would set the tool frame to be the frame to which the target is mounted.
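If you do end up recording the poses yourself as (x, y, z, rx, ry, rz), something like the sketch below can turn them into the Eigen::Isometry3d the calibration expects. The assumption that rx/ry/rz are fixed-axis roll-pitch-yaw angles in radians is mine, so check your controller's convention (some robots report an angle-axis rotation vector instead, which would need a single Eigen::AngleAxisd built from the vector):

```cpp
#include <Eigen/Geometry>

// Hypothetical helper: build an Eigen::Isometry3d from (x, y, z, rx, ry, rz),
// assuming rx/ry/rz are fixed-axis roll-pitch-yaw angles in radians.
Eigen::Isometry3d poseFromXyzRpy(double x, double y, double z,
                                 double rx, double ry, double rz)
{
  Eigen::Isometry3d pose = Eigen::Isometry3d::Identity();
  pose.translation() = Eigen::Vector3d(x, y, z);
  pose.linear() = (Eigen::AngleAxisd(rz, Eigen::Vector3d::UnitZ())
                   * Eigen::AngleAxisd(ry, Eigen::Vector3d::UnitY())
                   * Eigen::AngleAxisd(rx, Eigen::Vector3d::UnitX()))
                      .toRotationMatrix();
  return pose;
}
```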
> 3: Define a problem
You should be able to define the problem in almost the same way as the camera-on-wrist example. Basically you could load the extrinsic data set saved by the CLI tool described above and assign those objects to the correct parts of the calibration problem struct.
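For illustration, here is a rough sketch of what that could look like for the static-camera case. The type and member names (ExtrinsicHandEyeProblem2D3D, Observation2D3D, the header path) are based on recent versions of rct_optimizations and may differ in yours; intrinsics, camera_guess, target_guess, wrist_poses and correspondences stand in for the data you loaded:

```cpp
#include <rct_optimizations/extrinsic_hand_eye.h>

// Sketch only: build the hand-eye problem from previously loaded data.
rct_optimizations::ExtrinsicHandEyeProblem2D3D problem;
problem.intr = intrinsics;                            // camera intrinsics from your YAML file
problem.camera_mount_to_camera_guess = camera_guess;  // base-to-camera guess (camera fixed in the cell)
problem.target_mount_to_target_guess = target_guess;  // wrist-to-target guess (target on the tool)

for (std::size_t i = 0; i < wrist_poses.size(); ++i)
{
  rct_optimizations::Observation2D3D obs;
  obs.to_camera_mount = Eigen::Isometry3d::Identity();  // camera does not move relative to the base
  obs.to_target_mount = wrist_poses[i];                 // base-to-wrist pose recorded for image i
  obs.correspondence_set = correspondences[i];          // 2D-3D correspondences found in image i
  problem.observations.push_back(obs);
}
```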
> 4: Add guesses to the problem - not sure on this part - can they be very broad?
As a rule of thumb, the closer you can get your guesses to the actual values, the better the calibration will be. If your guess isn't close enough to the true solution, the optimization might end in a local minimum, which may not be a very good solution. Generally the final_cost_per_obs member of the calibration result (i.e. the average squared error of measurement vs. predicted) will give you an indication of how good the calibration was. Typically 2D cameras can detect circle centers with sub-pixel accuracy, so a good final cost per observation might be less than 0.25.
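For example, a quick check might look like the sketch below. final_cost_per_obs is the member discussed above; the converged flag and the exact result type are assumptions about your version of rct_optimizations:

```cpp
#include <iostream>

// Run the optimization and sanity-check the residual error.
auto result = rct_optimizations::optimize(problem);

// final_cost_per_obs is the mean squared pixel error, so 0.25 corresponds to
// roughly half a pixel of reprojection error per observation.
if (!result.converged || result.final_cost_per_obs > 0.25)
{
  std::cerr << "Calibration is suspect: final cost per observation = "
            << result.final_cost_per_obs << std::endl;
}
```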
> 6: Do I need to change anything else in this example file - how does it know it's a static camera in the cell vs. on the robot (or does it matter with regard to the maths)?
We recently updated the extrinsic hand-eye calibration such that it can represent both the static-camera-moving-target and static-target-moving-camera problems. The only thing you need to change is which observation transform the wrist pose is associated with. From here, you should change:
// Let's add the wrist pose for this image as the "to_camera_mount" transform
// Since the target is not moving relative to the camera, set the "to_target_mount" transform to identity
- obs.to_camera_mount = wrist_poses[i];
- obs.to_target_mount = Eigen::Isometry3d::Identity();
+ obs.to_camera_mount = Eigen::Isometry3d::Identity();
+ obs.to_target_mount = wrist_poses[i];
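For what it's worth, with the observations swapped this way the solved transforms take on a different physical meaning. The result member names below are assumed from recent versions of rct_optimizations, so check your own headers:

```cpp
// In the static-camera case the camera "mount" is effectively the robot base
// (cell) frame and the target "mount" is the wrist, so:
Eigen::Isometry3d base_to_camera = result.camera_mount_to_camera;   // camera fixed in the cell
Eigen::Isometry3d wrist_to_target = result.target_mount_to_target;  // target on the wrist
```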
Many thanks for the detailed answer. Really helpful. I'll give this a go.
BTW - Is it OK to post any questions I might encounter here, or is there a better way of doing this than clogging up the issues?
Thanks again
> BTW - Is it OK to post any questions I might encounter here, or is there a better way of doing this than clogging up the issues?
IMO that's part of what the issues page is for, so I'd say go for it!
Hi guys,
I am trying to use the 5 x 5 circular grid for the calibration but I can't seem to get it to detect any circles in the image. I've used chessboard patterns in the past with no problems, but I was hoping to keep your code as-is and get it working before making changes. Any idea what it doesn't like about them? Too big, too much clutter? I have attached an example.
Thanks in advance
There are a lot of parameters that you can play with in the circle detector class. The most common issues I seem to run into are:
- the filterByColor and circleColor parameters (in your case, true and 0)
- the minArea and maxArea parameters

If these don't solve your issue, I would suggest looking at the values you are using for the rest of the parameters to see if they make sense for this particular image.
In the camera on wrist example, circles are detected at this line, where the findObservations function uses some default values for the circle detector. You could pass in your own circle detector parameters by using this overload of findObservations instead.
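As a rough illustration only: the parameter struct and the exact findObservations overload may be named differently in your version of rct_image_tools, so treat everything below as an assumption apart from the parameter names already mentioned above:

```cpp
// Sketch: tune the detector parameters discussed above and pass them to the
// findObservations overload that accepts custom circle detector parameters.
rct_image_tools::CircleDetectorParams params;
params.filterByColor = true;  // keep filtering by color...
params.circleColor = 0;       // ...looking for dark circles on a light background
params.minArea = 100.0;       // expected circle area in pixels; tune to your image resolution
params.maxArea = 25000.0;

rct_image_tools::ModifiedCircleGridObservationFinder finder(target);
auto observations = finder.findObservations(image, params);  // overload taking detector params
```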
Many thanks again for your inputs. I had tried min and max area but hadn't tried the other ones you mentioned. I'll let you know how I get on.
Hi guys, a quick update. I actually had to blur my images in the code before I could get the circle detector to work. I also had to adjust the max area as suggested above for the blob detector, but at least it's finding the circles now. I ran a calibration but the results are not good, so I'm doing something stupid!
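The smoothing step can be as simple as a Gaussian blur with OpenCV before handing the image to the observation finder. A minimal sketch, where raw_image is a placeholder for the loaded cv::Mat and the kernel size is just a starting point to tune:

```cpp
#include <opencv2/imgproc.hpp>

// Lightly smooth the raw image so the blob/circle detector sees cleaner edges.
cv::Mat blurred;
cv::GaussianBlur(raw_image, blurred, cv::Size(5, 5), 0.0);
// ...then run the observation finder on `blurred` instead of `raw_image`
```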
I decided to test the code using the .launch files as it provides visual feedback on what is going on. This is what I called: "roslaunch rct_examples camera_on_wrist_example.launch"
I believe I have the following in place:
- a .yaml for each of the joint moves
- .bmp images taken for each of the target positions
- updated the static_camera_guesses.yaml with what I believe the values should be
- created an intrinsics file for my camera
- a target.yaml file for the 5x5 with 0.0015 m spacing
Have I missed some other step, or is it more likely to do with the data I have fed the software?
I could upload the data and images if there was an easy way to do that. Otherwise, any suggestions on how I can best debug?
Thanks in advance
A few screen shots that might help
Physical setup - UR3 with a camera
Hi guys,
Just an update on this. I can't seem to find where the problem is. I've checked the input data and it all looks correct and is in the right units. I also played around with the guesses, but it doesn't seem to make any difference to the results I am getting. When I look at the re-projection circles they look really small compared to what they should be. Some circle patterns in the images seem to be in the right orientation but are definitely the wrong scale; others are the wrong scale and orientation. Anyone able to point me in the direction of how best to debug the problem, as I'm not sure where to go next?
Thanks in advance
@johntraynor Sorry you're having so much trouble. I have lots of experience with the calibration you are performing. From your description, it sounds very much like your initial conditions are incorrect. Eye-hand calibration is identical whether the camera or the target is on the EOAT. However, if you use TF to get the transform information, you must always listen for the transform between where the camera is mounted and where the target is mounted in the right direction. If you give it camera-to-target when you want target-to-camera, you get the inverse, and that can screw everything up. The semantics here are confusing. You want the matrix that multiplies points expressed in the frame on which the target is mounted and expresses them in the frame on which the camera is mounted. I suspect this is what you have wrong. If not, it would be best if you made a zip of your images and pose info and let someone take a look. This can be frustrating. We really need to make an easy-to-use GUI.
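For example, if you are pulling the poses from TF, the lookup below returns the transform that takes points expressed in the target-mount frame and expresses them in the camera-mount frame. "base_link" and "tool0" are placeholders for whatever frames the camera and target are actually mounted to in your setup:

```cpp
#include <geometry_msgs/TransformStamped.h>
#include <ros/ros.h>
#include <tf2_eigen/tf2_eigen.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>

// lookupTransform(target_frame, source_frame, time) returns the transform that
// maps data from source_frame into target_frame. Here "base_link" stands in for
// the frame the static camera is mounted to and "tool0" for the frame the
// target is mounted to.
tf2_ros::Buffer tf_buffer;
tf2_ros::TransformListener tf_listener(tf_buffer);
geometry_msgs::TransformStamped msg =
    tf_buffer.lookupTransform("base_link", "tool0", ros::Time(0), ros::Duration(1.0));
Eigen::Isometry3d to_target_mount = tf2::transformToEigen(msg);
```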
Many thanks for the reply. Really appreciate it. Can I zip up the files and post them here? Images are about 5 MB each. Thanks in advance
@johntraynor Don't post here. Rather, create a dropbox or something equivalent and send the link to clewis at-symbol swri dottt org. I'll take a look. I really do suspect your initial pose estimates.
Many thanks. I’ll let you know when I send it to you.
I just sent you a link with the data, Chris. Let me know if you need anything else. Thanks again
Hi guys,
Just trying this code out (thanks for all the good work)
If I was to take the example "camera_on_wrist.cpp" and modify it so that it worked for a static external camera setup, what are the main changes I would have to make to get a standalone application working? I'm assuming it's the following:
1: Take a set of images from multiple angles with a calibration grid attached to the robot end effector tool
2: Record the base pose for each of the images taken (What format should these be in? Is (x, y, z, rx, ry, rz) acceptable?)
3: Define a problem
4: Add guesses to the problem - not sure on this part - can they be very broad?
5: Import intrinsics
6: Do I need to change anything else in this example file - how does it know it's a static camera in the cell vs. on the robot (or does it matter with regard to the maths)?
Thanks in advance