Closed gabrielpeixoto-cvai closed 4 years ago
After long hours of reading the code and trying to understand it, I figured out how to obtain the desired transforms. I will try to explain it below; if you have any issues, let me know.
Default operation (present in this repository): We provide the solver with TF_robotBase2EE and TF_camera2marker, and we obtain TF_marker2EE as the result.
Eye-in-base (what I want): We provide the solver with TF_EE2robotBase and TF_marker2camera, and we obtain TF_camera2robotBase as the result.
Eye-in-hand (I guess, not tested): We provide the solver with TF_robotBase2EE and TF_marker2camera, and we obtain TF_camera2EE as the result.
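The three cases above differ only in which input transforms are inverted before being handed to the solver. A minimal sketch of that bookkeeping, assuming 4x4 homogeneous matrices (the solver call itself is omitted; the `TF_*` naming mirrors the list above and is not the repository's actual API):

```python
import numpy as np

def invert_tf(T):
    """Invert a 4x4 homogeneous transform via R^T and -R^T t
    (cheaper and numerically nicer than a general matrix inverse)."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical pair preparation for each configuration:
#   default:     (T_robotBase2EE,            T_camera2marker)
#   eye-in-base: (invert_tf(T_robotBase2EE), invert_tf(T_camera2marker))
#   eye-in-hand: (T_robotBase2EE,            invert_tf(T_camera2marker))
```

The key point is that inverting both inputs changes which unknown frame the AX = XB formulation solves for.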
Maybe it was a dumb question, but I am quite new to this problem; I have only been working on it for a couple of weeks.
Hello,
First of all, I will try to explain my problem. If you need more information, just let me know. And of course, thank you for providing this code.
My setup:
What I want:
Why I want:
I was previously using easy_handeye. The output of their method was reasonably accurate but had rotation errors that led to over 10 mm of error when converting from the camera frame to robot motion.
I am trying to use your method because it seems more accurate: the Ceres library seems promising for refining an estimate, and the dual-quaternion solution seems more robust to rotation errors than the classic methods implemented in ViSP.
How I am testing it:
Currently I am working in simulation, so I know the camera-robotBase and marker-endEffector transforms exactly and can compare the output of your method against my "ground-truth" simulation.
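For a comparison like this it helps to reduce the difference between the estimated and ground-truth transforms to two numbers. A small sketch of one common choice of metric (the function name and the rotation-angle-from-trace formula are my own illustration, not part of the repository):

```python
import numpy as np

def transform_error(T_est, T_gt):
    """Return (rotation error in degrees, translation error) between
    two 4x4 homogeneous transforms, using the relative transform."""
    dT = np.linalg.inv(T_gt) @ T_est
    # Rotation angle of the residual rotation, from its trace.
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    trans_err = np.linalg.norm(dT[:3, 3])
    return rot_err_deg, trans_err
```

With a perfect estimate both values are zero; a 10 mm translation error would show up directly in the second value.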
I am not using your workflow exactly; I basically integrated your estimateHandEyeScrew() method into my system (doing the correct conversions from TF2 data to Eigen and vice versa). How do I know the integration is right? Because I can estimate the endEffector-marker transform correctly, the same transform you provide as an example.
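The TF2-to-matrix conversion step mentioned above is a common source of silent errors (quaternion component ordering in particular). A minimal sketch of what that conversion looks like, assuming tf2's (x, y, z, w) quaternion ordering; this is my own illustration, not code from the repository:

```python
import numpy as np

def tf_to_matrix(translation, quaternion):
    """Build a 4x4 homogeneous matrix from a translation (x, y, z) and a
    unit quaternion in tf2's (qx, qy, qz, qw) ordering."""
    x, y, z, w = quaternion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = translation
    return T
```

A quick sanity check is to convert a known rotation (e.g. 90 degrees about z) and confirm the resulting matrix, since a swapped (w, x, y, z) ordering produces a plausible-looking but wrong transform.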
What is my problem?
I cannot estimate the camera-robotBase transform. I am providing the method with pairs of the transforms endEffector-robotBase and camera-marker. The resulting output is messy and very far from reality, yet the method does not report a convergence error. I have tried 15, 25, 40, and even 60 transform pairs.
However, when I provide pairs of the transforms robotBase-endEffector and camera-marker, I can obtain the marker-endEffector transform accurately.
Am I doing something wrong? I have been struggling with this issue for a couple of hours now.