Open kptrifork opened 5 months ago
Hi Kevin,
This code only uses 2D images, and in my experience it's actually more common to do hand-eye calibration with 2D images. At the time I also tried using point clouds, since the camera I was using provided point cloud data as well, but I found the calibration from 2D images to be better.
Here's a good video explaining the topic (you don't need to watch the second half, which is just a demo of how to use the plugin): 3 - MoveIt - Easy Hand Eye Calibration with MoveIt
Overall I used 2D images because they gave better calibration results in my setup. You can find a direct comparison between 2D and 3D in this study. It might be a bit outdated, but I haven't kept up with newer work: Hand-eye Calibration with a Depth Camera: 2D or 3D?
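To make the 2D-image approach concrete: each robot pose gives you a gripper-to-base transform from the robot and a target-to-camera transform from the 2D image (e.g. via chessboard detection and solvePnP), and the unknown camera-to-gripper transform X satisfies the classic AX = XB equation between relative motions. OpenCV provides `cv2.calibrateHandEye` for this; the sketch below is a minimal NumPy-only illustration of the underlying math (rotation via axis alignment, translation via least squares), assuming noise-free poses and non-degenerate motions. All function names here are mine, not from any particular library.

```python
import numpy as np

def rot_from_axis_angle(axis, angle):
    """Rodrigues formula: rotation matrix from an axis and angle."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rotation_axis(R):
    """Unit rotation axis of R (assumes angle is not 0 or pi)."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def calibrate_hand_eye(gripper2base, target2cam):
    """Solve AX = XB for X = camera-to-gripper, given lists of 4x4 poses.

    A = relative gripper motions, B = relative camera motions, derived from
    the constancy of target-to-base = gripper2base @ X @ target2cam.
    """
    As, Bs = [], []
    for i in range(len(gripper2base) - 1):
        A = np.linalg.inv(gripper2base[i + 1]) @ gripper2base[i]
        B = target2cam[i + 1] @ np.linalg.inv(target2cam[i])
        As.append(A)
        Bs.append(B)
    # Rotation: axes satisfy axis(R_A) = R_X @ axis(R_B); align the two
    # axis sets with an orthogonal Procrustes / Kabsch step.
    M = sum(np.outer(rotation_axis(A[:3, :3]), rotation_axis(B[:3, :3]))
            for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(M)
    R_X = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all motions.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3] = R_X
    X[:3, 3] = t_X
    return X
```

You need at least three poses whose relative rotations have non-parallel axes; in practice you collect many more and let the least-squares step average out detection noise, which is exactly why the quality of the 2D target detection matters so much.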
Best, João
Hi Jones,
Thank you for sharing the GitHub repository.
I'm quite new to the concept of Hand-Eye Calibration, even though I have a degree in Robotics Engineering.
Does your code use 3D point clouds, or does it work only with 2D images? I was under the impression that hand-eye calibration could only be done with a 3D sensor. Am I mistaken about this?
If you have any articles or resources that could help me better understand this topic, I would greatly appreciate it.
Best regards, Kevin