raultron opened this issue 6 years ago
Could you post the procedure you use to build with ViSP as a backend, please?
Thanks a lot @raultron! It is always nice and encouraging to hear good feedback.
@nathanlem1: it would be great if you could create a new issue for each topic, to keep the repo organised.
I am using the ros-visp wrapper, which offers ROS interfaces (topics and services) for the ViSP library. In particular, I am using the hand2eye service.
The wrapper can be installed via apt in Ubuntu, as it is automatically packaged by the ROS community:
sudo apt-get install ros-lunar-vision-visp
After installing, you can start a roscore and the calibration service, as described in the docs.
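For reference, a minimal sketch of starting the backend on my side (this assumes ROS Lunar and that the node name in visp_hand2eye_calibration has not changed in your distro, so double-check with rosnode list / rosservice list):

roscore &
rosrun visp_hand2eye_calibration visp_hand2eye_calibration_calibrator

As far as I understand, easy_handeye then calls the compute_effector_camera_quick service advertised by this node to compute the calibration.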
Not really an Issue, just wanted to thank you. This package helped me with a crucial calibration that I needed on a tight schedule.
Hi, could you please let me know how you did the calibration? I am still struggling to get a good result: every time I compute the calibration with a new set of samples I get a different result, and all of them are wrong.
I am performing eye-in-hand calibration with a RealSense D435 camera and a UR10 robot.
Thanks in advance
Hi there,
My calibration setup was somewhat specific. I have a camera with spherical markers mounted on top of it, which are detected by an OptiTrack system with millimeter accuracy. I wanted the transform between the camera coordinates and the spherical marker coordinates.
In this case the camera has to move and the ArUco marker is fixed in space, so I had to set up the topics correctly for easy_handeye with a launch file (see the sketch below). I placed the ArUco marker on a table facing the camera, put the camera on a tripod, and moved it around, changing both the relative orientation and the distance, taking more than 20 measurements. I made sure the setup was completely static for each measurement and tried to cover the working volume evenly. easy_handeye then gave me the transform between the camera and the spherical marker coordinates.
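In case it is useful, a rough sketch of the launch invocation I mean is below; the frame names (optitrack_world, camera_rig_markers, camera_link, aruco_marker) are placeholders for my setup, not anything easy_handeye mandates, and the exact argument names should be checked against your version's calibrate.launch:

roslaunch easy_handeye calibrate.launch \
    eye_on_hand:=true \
    freehand_robot_movement:=true \
    robot_base_frame:=optitrack_world \
    robot_effector_frame:=camera_rig_markers \
    tracking_base_frame:=camera_link \
    tracking_marker_frame:=aruco_marker

Here the OptiTrack plays the role of the robot's forward kinematics: its world frame stands in for the robot base and the rigid body of spherical markers on the camera stands in for the end effector, so the eye-in-hand result is exactly the transform between the spherical markers and the camera.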
I also performed an additional calibration.
I placed additional spherical markers on the ArUco paper, since I also needed the transform from the OptiTrack coordinates to the center of the ArUco marker. For that case I changed the topics with a launch file, so now the camera is the one that is fixed and the ArUco marker is the one that moves.
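Roughly, this second case corresponds to the eye-on-base configuration, since the camera is now static in the OptiTrack world and the tracked ArUco sits on the moving rigid body. A sketch of the invocation, again with placeholder frame names:

roslaunch easy_handeye calibrate.launch \
    eye_on_hand:=false \
    freehand_robot_movement:=true \
    robot_base_frame:=optitrack_world \
    robot_effector_frame:=aruco_paper_markers \
    tracking_base_frame:=camera_link \
    tracking_marker_frame:=aruco_marker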
With those two calibrations I could obtain ground-truth measurements for my camera-to-ArUco-marker detection. All the measurements were consistent with what I was measuring, so the calibrations were correct.
Thank you very much for your detailed answer, I appreciate it. I hope I can make it and then I will share the experience.
Have a good one. Cesar
@Sinchiguano, did you manage to solve your issues? Off the top of my head, they may be due to a couple of factors, including:
Hey,
Thanks a lot for your detailed explanation, it is really helpful. Everything you detailed makes absolute sense; I can see now that I was doing most of it wrong, or with too little care, and your advice made clear exactly what I was doing wrong.
I think that with your tips the calibration is going to work.
Best regards Cesar
Hey, I just wanted to thank you for this repository; I finally succeeded with the robot-camera calibration. I did both types of calibration, eye-in-hand and eye-on-base, without any problem, but without your tips I would not have made it. Best regards, Cesar
Many thanks to the authors of this excellent repo!