- I can get an OK calibration (+/- 10mm in most of the axes & < 3px per dot) when I use images & poses which are not that skewed
Depending on the parameters of your camera, I would expect the final residual error to be less than half a pixel. What is the resolution of your camera? Also, where did you get the +/- 10mm number? Do you know the actual location of your camera?
- Has anyone done a calibration with this example?
I haven't done a static camera example in a while, but we will be working on one today as an example of the ChArUco target detection that @jaberkebile has been working on. We'll upload the example data soon (by next Wednesday at the latest) and use the static camera example to perform the calibration. We'll check and make sure it's working properly.
- I've checked all the images and all the correspondences are being collected properly
It is very important that the correspondences are assigned correctly, so it's good to hear you've checked these. Just FYI, for the circle grid targets the pattern always starts in the top left corner (when looking at the target with the large circle on the bottom row on the left) and works its way across the columns in each row until reaching the bottom right circle. The largest circle should always be labeled as the origin
- Any ideas on what could be wrong? Why does introducing angled images and poses into the same code produce such bad results? (It should make it better.)
I think the most straightforward thing we can do to help is to write a few additional unit tests here and here that specifically test the static-camera-moving-target use-case. Perusing through the unit tests we have, it seems like we have actually only written tests for the moving-camera-static-target use-case. There is a possibility that the math is not totally correct. If you want to try to tackle this, it would be much appreciated. Otherwise, I can create the tests over the next couple of days as I find time
@johntraynor @marip8 A few things. First, the expected residual for eye-hand calibration is much higher than for intrinsic calibration, mainly because the robot kinematics themselves are inaccurate. Second, it should make no difference whether the target is mounted on the robot or the camera is mounted on the robot; the math is exactly the same. Here I'm not referring to robo_cal_tools, but to the calibration library, where the transform from the target mount frame to the camera mount frame is retrieved from TF. Third, I always suspect initial conditions when using robo_cal_tools, since these are not obtained from a URDF model that looks approximately correct. Finally, your robot calibration may be way off; if so, this will be more pronounced when tilt angles are greater. Sorry I've not had time to look at your images. I'm swamped this week.
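For illustration, here is a minimal Eigen sketch of the transform chain for the static-camera, target-on-wrist case (the frame names are made up for this example and are not taken from the library). The chain has exactly the same structure as the camera-on-wrist case; only which transforms are held fixed versus optimized changes:
#include <Eigen/Geometry>

// base_to_camera  : fixed camera pose in the robot base frame (calibrated)
// base_to_wrist   : robot forward kinematics for one observation (given)
// wrist_to_target : target mount pose on the wrist (calibrated)
Eigen::Isometry3d cameraToTarget(const Eigen::Isometry3d& base_to_camera,
                                 const Eigen::Isometry3d& base_to_wrist,
                                 const Eigen::Isometry3d& wrist_to_target)
{
  return base_to_camera.inverse() * base_to_wrist * wrist_to_target;
}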
Hi guys,
Much appreciate the inputs.
The camera is a 5 MP Dalsa Genie camera with a 12 mm Fuji lens.
I have measured the distance from the robot to the camera manually and used these measurements as the initial estimates. It's a fairly simple setup without too many orientation factors to deal with.
The correspondences look good. Michael, if you look at the images attached initially, do they look like what you would expect? They all follow the same pattern as in the two images I uploaded.
I could try and help with the unit testing, although I'm not the greatest coder in the world. I'll take a look at what is there, and maybe I'll come back to you with what I could possibly do.
It would be great to see what results you get using the ChArUco target with this same software. Depending on how that goes, I could trial the new code and let you know how I get on.
I'll try the recently added calibration features and see what they show.
Quick question - could bad initial guesses work for the relatively flat poses but go astray for the steeply angled ones? It would seem strange that I can get a decent calibration with these initial guesses, but once I throw the skewed images in, it all goes awry.
Chris - absolutely no problem on the delayed responses. I'm only too delighted when you do respond as it helps me better understand this whole area.
Hi guys,
I was away for a few days, so I'm just catching up. Any luck running the new ChArUco target software with the static camera tool example?
Thanks in advance
@johntraynor we finished the calibration and put together an example of static-camera-moving-target extrinsic calibration in #73. For reference, we used a 2 megapixel AVT camera with 6mm lens. The optimization converges with a final squared cost of 1.3 px^2 (1.14 pixels per feature) which is reasonably good. This was also using raw, non-rectified images.
Quick question - could bad initial guesses work for the relatively flat poses but go astray for the steeply angled ones? It would seem strange that I can get a decent calibration with these initial guesses, but once I throw the skewed images in, it all goes awry.
Several of the images that we took for this calibration had significant skew, and the optimization still worked correctly. You should take a look at the images uploaded for the example and see how they compare to your skewed images. Something doesn't quite add up with the calibration working for straight-on views but not for skewed ones. It should succeed in both cases and give you very similar results; I would think that adding skewed images could actually help improve the final result.
I would take a look at the example in my fork (cleaned up version of #73 which I'll push onto that PR soon) and see if it looks similar to the way you're constructing your calibration.
One other thing I might mention is that the poses you collect with the images must very accurately represent the configuration of your hardware. I've run into problems before where a previously unnoticed mistake in the URDF model causes a discrepancy between the robot controller reported tool flange transform and the TF reported tool flange transform, resulting in a bad calibration. You might double check that your ROS models and ROS-reported transforms are accurate.
Really appreciate the feedback and data. Has to be something wrong with our poses. I'll run the code with your data set just to make sure it all works well before we look at our data again.
One question - we are getting our poses directly from a UR robot over a TCP socket connection through a small robot program we wrote. We are not using ROS to collect the poses, so I'm suspecting this may be where it is all going wrong. Basically we open a socket to the robot, then request the pose using the get_forward_kin() function, which basically sends us back the Rx, Ry, Rz, x, y, z values of the robot, and we then save the image at the same time we save the poses. Is this totally the wrong way of getting the correct pose of the robot?
You don't necessarily need to collect pose data from ROS (although there is a good, well-supported ROS driver for UR). Conceptually there should be no reason why collecting data the way you are now wouldn't work.
One tricky aspect is making sure you interpret the Rx/Ry/Rz values correctly. I'm not sure if those are Euler angles (and if they are, in which order they are applied: i.e. XYZ, ZYX, etc.) or scaled axis angle rotations, etc. If you interpret these incorrectly, then that would definitely wreck your calibration results.
@drchrislewis has suggested creating a utility to display the poses in Rviz so you can verify that your robot pose really looked like that before running the calibration. That would be pretty simple to do. Alternatively you could overlay the expected origin of the target onto each image you acquired by projecting the target origin into the image plane using your initial target/camera guesses. If the origin appears to be pretty close to where you expected it to be, then you would have confidence that the data you collected was valid
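As a rough illustration of that second approach (the function and variable names below are made up, but cv::projectPoints is the relevant OpenCV call), something like this could mark where the target origin is expected to land in each image, given your initial guesses:
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Mark where the target origin should appear, given the initial
// camera-to-target guess (R, t) and the camera intrinsics
void drawExpectedOrigin(cv::Mat& image,
                        const cv::Mat& R,             // 3x3 rotation, camera to target
                        const cv::Mat& t,             // 3x1 translation, camera to target
                        const cv::Mat& camera_matrix,
                        const cv::Mat& dist_coeffs)
{
  // The target origin is (0, 0, 0) in the target frame
  std::vector<cv::Point3d> object_points{cv::Point3d(0.0, 0.0, 0.0)};

  // OpenCV expects the rotation in Rodrigues (rotation vector) form
  cv::Mat rvec;
  cv::Rodrigues(R, rvec);

  std::vector<cv::Point2d> image_points;
  cv::projectPoints(object_points, rvec, t, camera_matrix, dist_coeffs, image_points);

  // If this circle lands far from the large target circle, the pose data or
  // the initial guesses are probably wrong
  cv::circle(image,
             cv::Point(cvRound(image_points.front().x), cvRound(image_points.front().y)),
             10, cv::Scalar(0, 0, 255), 2);
}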
One benefit of using ROS for calibration data collection is that rct_ros_tools has a pretty simple utility for getting poses and images and saving them correctly into a data structure that can be easily loaded into a calibration. If you have some experience with ROS, it would probably be worth giving this a shot. Basically you would run the UR ROS driver, move the robot around to the calibration positions (manually with the teach pendant or in an automated way), and use the services defined in this node to collect and save the data.
One tricky aspect is making sure you interpret the Rx/Ry/Rz values correctly. I'm not sure if those are Euler angles (and if they are, in which order they are applied: i.e. XYZ, ZYX, etc.) or scaled axis angle rotations, etc. If you interpret these incorrectly, then that would definitely wreck your calibration results.
Yes. UR by default on the TP uses axis-angle. I haven't checked what get_forward_kin() returns, but you really want to make sure you interpret it correctly, as otherwise things won't work.
You don't necessarily need to collect pose data from ROS (although there is a good, well-supported ROS driver for UR).
@johntraynor: if you'd like to verify whether your custom script is the cause: the driver can be used to just read data from the robot (such as the current pose) without requiring any setup on the controller itself.
Only when motion should be controlled by the ROS side would you need to install the URCap.
If you do try out the driver, be sure to follow the steps to extract the calibration data, to make sure your URDF corresponds to your particular robot.
Additionally/alternatively: the tool0_controller frame broadcast by the driver corresponds to the EEF (+ toolframe) position the controller broadcasts itself.
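If you do end up running the driver, a small (untested) ROS sketch like the one below could be used to read that frame and compare it with what your script records; the frame names depend on your driver configuration, so treat them as placeholders:
#include <ros/ros.h>
#include <tf2_ros/transform_listener.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ur_pose_listener");
  ros::NodeHandle nh;

  tf2_ros::Buffer buffer;
  tf2_ros::TransformListener listener(buffer);

  // Wait up to 5 seconds for the controller-reported flange pose to appear
  geometry_msgs::TransformStamped tform =
      buffer.lookupTransform("base", "tool0_controller", ros::Time(0), ros::Duration(5.0));

  ROS_INFO_STREAM("base -> tool0_controller:\n" << tform);
  return 0;
}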
Hi guys, some great help and feedback here, so I have lots to go on. I'm pretty convinced it's the rotation we are inputting in the wrong format. Can I just get someone to clarify: does the calibration expect the rotation as roll/pitch/yaw and not Euler angles prior to conversion to a quaternion? Thanks
Can I just get someone to clarify: does the calibration expect the rotation as roll/pitch/yaw and not Euler angles prior to conversion to a quaternion?
Technically, the pose can be input in any form as long as the user converts it to a 4x4 transformation matrix (Eigen::Isometry3d). All of the calibration-related classes/structures use this data type to represent a transformation. Eigen has a variety of functions for converting Euler angles, RPY, axis-angle rotations, etc. to this type of transformation. There's nothing to stop you from loading your orientation format and converting it to an Eigen::Isometry3d.
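For example, a pose stored as a translation plus a quaternion can be turned into an Eigen::Isometry3d with a few lines (a minimal sketch; any other orientation representation just needs to be converted to a quaternion or rotation matrix first):
#include <Eigen/Geometry>

// Build an Eigen::Isometry3d from a translation and a quaternion
Eigen::Isometry3d makePose(const Eigen::Vector3d& translation,
                           const Eigen::Quaterniond& orientation)
{
  Eigen::Isometry3d pose = Eigen::Isometry3d::Identity();
  pose.translation() = translation;
  pose.linear() = orientation.normalized().toRotationMatrix();
  return pose;
}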
If you want to use the YAML loaders that we've created in rct_ros_tools, then the YAML files need to have the orientation in quaternion form.
OK that makes total sense
I have been using this code to convert the axis-angle rotation from the UR to a quaternion before I save to the YAML files, but I don't think it's correct.
(x, y & z below are the axis-angle values from the UR robot)
Eigen::Quaterniond q;
q = AngleAxisd(x, Vector3d::UnitX()) * AngleAxisd(y, Vector3d::UnitY()) * AngleAxisd(z, Vector3d::UnitZ());
I'm not totally sure. What you posted above is really equivalent to Euler angles applied XYZ. Sometimes axis-angle has 4 values (the angle and the x/y/z components of the axis vector), and sometimes it has 3 values where the angle is the magnitude of the values and the axis is the normalized vector of those values, like this:
Eigen::Vector3d rotation(x, y, z);
double angle = rotation.norm();
Eigen::Vector3d axis = rotation.normalized();
Eigen::Quaterniond q(Eigen::AngleAxisd(angle, axis));
This document on the UR website seems to indicate that the UR controller uses the latter, 3-value method
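Putting the pieces together, a rough (untested) helper for turning the full UR pose [x, y, z, Rx, Ry, Rz] into the Eigen::Isometry3d the calibration expects could look like this, assuming the rotation really is the 3-value rotation-vector form:
#include <Eigen/Geometry>
#include <limits>

Eigen::Isometry3d urPoseToIsometry(double x, double y, double z,
                                   double rx, double ry, double rz)
{
  Eigen::Vector3d rotation(rx, ry, rz);
  double angle = rotation.norm();

  Eigen::Isometry3d pose = Eigen::Isometry3d::Identity();
  pose.translation() = Eigen::Vector3d(x, y, z);

  // Guard against a zero-length rotation vector, which has no defined axis
  if (angle > std::numeric_limits<double>::epsilon())
    pose.linear() = Eigen::AngleAxisd(angle, rotation.normalized()).toRotationMatrix();

  return pose;
}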
Or just use the driver ;)
Lesson learnt, lads - I thought my approach would be quicker just to get up and going, but I was totally wrong. The only good thing is I have learnt a lot. Thanks for all your patience.
I was kidding though: it's of course fine to do whatever has your preference.
No, you are dead right - I should have gone with the driver in the first place; I would have saved myself and you guys a lot of hassle.
Just refactored in the new code that Michael gave me and ran it on a small set of 4 images/poses, and the calibration results look good. I'll do a proper run tomorrow and hopefully we can close out this issue. Thanks again all.
OK lads - just to confirm, the problem was how we were saving the robot pose to the YAML files. We ran a new calibration with 15 sets and everything looks good. Thanks for all the help again. I've no doubt I'll be on again with more questions!
@johntraynor Great to hear. We have been pretty swamped these past few weeks and can't be very helpful as a result.
Thanks again - great set of tools
Good to hear that this resolved your issue. Feel free to open other issues as you encounter them, and we'll do what we can to support
Hi guys,
With the great help from Chris, I managed to get a calibration using the static_camera_example.launch / static_camera_extrinsic.cpp setup in examples. I am conscious of taking up too much of Chris's time, so hopefully someone can point me in the right direction on this problem I am having.
Now the problem I am facing is the following
I can get an OK calibration (+/- 10mm in most of the axes & < 3px per dot) when I use images & poses which are not that skewed
When I introduce images and poses that are very skewed/angled, the whole calibration returns nothing meaningful and the reprojection is way, way out (200+ px per dot). When I remove the skewed images and poses, the results resemble what I would expect to get back, albeit not that accurate.
So a few questions
Has anyone done a calibration with this example?
Any ideas on what could be wrong? Why does introducing angled images and poses into the same code produce such bad results? (It should make it better.)
Just some notes
I've attached an example of what I mean by a skewed image and non skewed image toward the bottom of this post
I've checked all the images and all the correspondences are being collected properly
I redid the intrinsic calibration again and I'm pretty happy with the numbers I am getting
The initial guesses seem correct, and the run on the less skewed images and poses would seem to back this up
I debugged the calibration code and everything seems fine code-wise, although it's a black box once it gets to the Ceres problem-solving part
I've been very careful in making sure the pose and images are aligned so I don't believe it's that sort of an issue
Any help with this would be greatly appreciated
Thanks JT
Non-skewed - these types of images and poses work, although the calibration is not the most accurate
Skewed - these types are really throwing the calibration off