ethz-asl / hand_eye_calibration

Python tools to perform time-synchronization and hand-eye calibration.
BSD 3-Clause "New" or "Revised" License

Add base_world transform matrix #77

Open laiaaa0 opened 6 years ago

laiaaa0 commented 6 years ago

Hi! I've been using hand-eye calibration to find the transformation between two coordinate systems, but I am also interested in the base-world transformation, since in my setup both frames are fixed. It would be a good idea to also return the base-world transform (and output it in a separate calibration.json file).
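For illustration, here is roughly what I have in mind, assuming the hand-eye result and one synchronized pose pair are available as 4x4 matrices (the frame names and the convention that T_a_b maps frame b into frame a are my own assumptions, not this repo's API):

```python
import numpy as np

def compute_base_world(T_base_hand, T_eye_world, T_hand_eye):
    """Recover the fixed base-to-world transform from one synchronized
    pose pair (T_base_hand, T_eye_world) and the estimated hand-eye
    transform T_hand_eye, by chaining world -> eye -> hand -> base."""
    return T_base_hand @ T_hand_eye @ T_eye_world
```

In practice this would probably be averaged over all synchronized pose pairs rather than taken from a single one.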

ffurrer commented 6 years ago

That's a fair point, feel free to open a PR for this.

NikolausDemmel commented 6 years ago

Hi Fadri (and others). Thanks for providing this tool!

I am also interested in the base-world transform and in aligned trajectories. I had a look at the code to check how this is computed for visualization / evaluation, and looking at https://github.com/ethz-asl/hand_eye_calibration/blob/5e2b6c9f0ec93ecf5a67d71a2d9eb647ac47adad/hand_eye_calibration/python/hand_eye_calibration/dual_quaternion_hand_eye_calibration.py#L926-L939 it seems that you "simply" re-reference each trajectory with respect to its first pose, which boils down to aligning the two trajectories by their first poses.
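If I read the linked code correctly, the effect is roughly the following (a minimal sketch with my own names, not the functions used in the repo):

```python
import numpy as np

def align_to_first_pose(poses):
    """Re-reference a list of 4x4 poses to the first one, so that the
    first pose becomes the identity."""
    T0_inv = np.linalg.inv(poses[0])
    return [T0_inv @ T for T in poses]
```

Aligning the two trajectories then amounts to comparing the two re-referenced pose lists directly.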

Wouldn't it make more sense, in particular for evaluating the alignment error but also for visualization, to do a least-squares trajectory alignment based on all poses (Horn's method)? Something like what is implemented in the RGBD benchmark tools (https://github.com/jbriales/rgbd_benchmark_tools/blob/80723c9c9530481ec7dc92d3c3f77575f6a25bec/src/rgbd_benchmark_tools/evaluate_ate.py#L47-L79)? I wonder if there is a specific reason you didn't do that and instead align the world and base frames based on the first pose only. Or did I maybe miss something?
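To be concrete, by Horn-style alignment I mean something like the following on the translation parts of the two trajectories (a rough sketch from memory, not the exact code from evaluate_ate.py):

```python
import numpy as np

def horn_align(model, data):
    """Least-squares rigid alignment of two 3xN point sets, returning
    R, t that minimize ||R @ model + t - data|| (Horn / Kabsch)."""
    model_mean = model.mean(axis=1, keepdims=True)
    data_mean = data.mean(axis=1, keepdims=True)
    # Cross-covariance of the zero-centered point sets.
    W = (data - data_mean) @ (model - model_mean).T
    U, _, Vt = np.linalg.svd(W)
    # Guard against a reflection in the SVD solution.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = data_mean - R @ model_mean
    return R, t
```

The resulting R, t would then give the base-world alignment based on all poses rather than just the first one.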

In case someone is working on this, the additions I'd love to see are: