jhu-lcsr / handeye_calib_camodocal

Easy to use and accurate hand eye calibration which has been working reliably for years (2016-present) with kinect, kinectv2, rgbd cameras, optical trackers, and several robots including the ur5 and kuka iiwa.
BSD 2-Clause "Simplified" License
543 stars 178 forks

Variance of about 1.5cm on the final transform. #15

Closed cthorey closed 6 years ago

cthorey commented 6 years ago

The transforms I have are:

Robot Base -> Forearm link -> Kinect link (a Kinect mounted on the forearm) -> ARUCO marker

The transform I need is: Forearm link -> Optical Tracker Base


I managed to collect about 500 input pairs. For each one of them, I ensure that the position of the marker is stable; it is not wobbly. I use the ar_track_alvar package to do the tracking.

To evaluate the precision of the calibration, I then randomly sample 50 pairs, run the solver, and repeat this a few hundred times. No matter which ARUCO marker/size/tracking precision threshold I use, I always end up with about 1.5 cm variance on the optical tracker position with respect to the forearm link. The final cost never goes below 1e-9, so I guess the solver is not able to converge properly.
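For reference, the evaluation loop described above can be sketched roughly as follows. This is a minimal sketch, not the actual evaluation code from this thread: `run_handeye_solver` is a hypothetical stand-in for invoking the solver on a subset of transform pairs, and each result is assumed to come back as a 4x4 homogeneous matrix.

```python
import numpy as np

def bootstrap_translation_spread(pairs, run_handeye_solver,
                                 n_trials=100, subset_size=50, rng=None):
    """Repeatedly calibrate on random subsets of the recorded pairs and
    report the spread of the estimated translation, mirroring the
    evaluation described above.

    `pairs` is the full list of recorded (robot, camera) transform pairs;
    `run_handeye_solver` is a hypothetical callable that returns the
    estimated 4x4 hand-eye transform for a given subset of pairs.
    """
    rng = rng or np.random.default_rng()
    translations = []
    for _ in range(n_trials):
        idx = rng.choice(len(pairs), size=subset_size, replace=False)
        T = run_handeye_solver([pairs[i] for i in idx])  # assumed 4x4 matrix
        translations.append(T[:3, 3])
    translations = np.asarray(translations)
    per_axis_std = translations.std(axis=0)  # spread per axis, in meters
    return per_axis_std, per_axis_std.max()
```

A per-axis standard deviation around 0.015 m from such a loop would correspond to the ~1.5 cm variance reported here.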

What kind of precision is expected when the solver converges down to 1e-14? Has anyone experienced a similar problem in a similar setup? When using a Kinect? With the ar_track_alvar library?

ahundt commented 6 years ago

I've found ar_track_alvar is pretty noisy (that's what we use also), but apparently not in your case?

We made a separate small ROS node that averages 50 samples from each stationary position during data collection. Also, the more tilted the tag is away from the camera, the less accurate the tracker is, and I've found ar_track_alvar is particularly vulnerable to transform flips, so you have to keep a careful eye on it.
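This is not the node described above, just a minimal sketch of the averaging idea using rospy and tf. The frame names (`camera_link`, `ar_marker_0`), sample count, and rate are assumptions to adapt to your own setup; quaternions are sign-aligned and renormalized, which is adequate when all samples of a stationary pose are close together.

```python
#!/usr/bin/env python
# Sketch: average the tracked marker pose over N samples at a stationary robot position.
import numpy as np
import rospy
import tf

def average_marker_pose(listener, parent="camera_link", child="ar_marker_0",
                        n_samples=50, rate_hz=10):
    rate = rospy.Rate(rate_hz)
    positions, quats = [], []
    while len(positions) < n_samples and not rospy.is_shutdown():
        try:
            trans, rot = listener.lookupTransform(parent, child, rospy.Time(0))
            positions.append(trans)
            q = np.array(rot)
            # Keep all quaternions in the same hemisphere before averaging.
            if quats and np.dot(q, quats[0]) < 0.0:
                q = -q
            quats.append(q)
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            pass
        rate.sleep()
    mean_t = np.mean(positions, axis=0)
    mean_q = np.mean(quats, axis=0)
    mean_q /= np.linalg.norm(mean_q)  # renormalize the averaged quaternion
    return mean_t, mean_q

if __name__ == "__main__":
    rospy.init_node("marker_pose_averager")
    listener = tf.TransformListener()
    rospy.sleep(1.0)  # let the tf buffer fill
    t, q = average_marker_pose(listener)
    rospy.loginfo("averaged translation %s, quaternion %s", t, q)
```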

Did you follow all the steps in the troubleshooting section? For example, did you calibrate your camera?

cthorey commented 6 years ago

I got some noise when using the quarter-HD resolution of the RGB image broadcast by the Kinect, but it is stable when using the full resolution. Also, playing with max_track_error and setting it very low decreases the noise.

My camera is indeed calibrated. I will try to manually set up a marker at a known position and verify that I get the same result from the tracker. Averaging over 50 samples sounds like a good idea too :+1:

What kind of accuracy do you usually get on the transform? Should it be down to the mm? If you do the calibration twice with two different sets of input pairs, how far off is the first likely to be from the second?

Thanks for your help,

ahundt commented 6 years ago

The accuracy of your calibration will be highly dependent on your sensor and calibration. We have a PrimeSense, which is very similar to the Kinect v1; based on visual inspection rather than an actual measurement of the error, we get to ~1 mm precision of the Cartesian position. I'm sure that if I also spent a little time estimating the pose of the camera relative to the robot base using the hand-eye calibration, I could probably get it down to 1 mm end effector error across the workspace. Right now it is 2-5 mm, since we set the camera pose based on a single sample after the calibration, which is fine for our current experiments. Photos can be seen at https://github.com/cpaxton/costar_stack

On another project we also have a fusionTrack 500, which looks to have ~0.1 mm calibration accuracy, also based on visual inspection of the data. There we only need to track physical markers and don't need perception or object recognition.

Basically, the solver will only be as good as your data. While there may be newer algorithms that are a bit more robust to noise, I've found that simply checking the setup for loose bolts, wobbly stands, bad calibration, etc. is much more important.

cthorey commented 6 years ago

Thanks for the advice, I'll look into the TransformInputPairs collection then!

ahundt commented 6 years ago

Feel free to reopen or post another issue if you run into another problem or have more details to add.

majiqiang commented 5 years ago

Hi! I want to know how to install handeye_calib_camodocal. Please help me! Thank you very much!

ahundt commented 5 years ago

You need ROS; you'll have to Google for those instructions.

Use catkin build to build a ROS package, following the standard ROS instructions: https://catkin-tools.readthedocs.io/en/latest/verbs/catkin_build.html

Then you can use one of the approaches in the installation section of the readme.

majiqiang commented 5 years ago

Thanks for your reply! I tried to build it in ROS, but when I run catkin_make on the package (handeye_calib_camodocal), this error appears: CMake Error at handeye_calib_camodocal/CMakeLists.txt:30 (find_package): By not providing "FindGlog.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "Glog", but CMake did not find one. Could not find a package configuration file provided by "Glog" with any of the following names: GlogConfig.cmake, glog-config.cmake. I followed the approaches in the installation section of the readme and I also installed the dependencies such as glog, gflags, and so on. What should I do?

ahundt commented 5 years ago

Scripts that will do most of this for you are in https://github.com/ahundt/robotics_setup

Most issues like the one you posted here are the kind of thing you can learn about by Googling the error message for the missing package (glog, gflags, etc.) and reading about CMake. This is how installing most C++ software goes, unfortunately, since there isn't really a standard, high-quality package manager like pip.