Closed: joeyfreakman closed this issue 3 months ago
Hi @joeyfreakman,
Thank you for your interest and for the great question. The mapping you asked about is achieved through an offline calibration process: we project 3D robot coordinates into 2D image coordinates, much as in a conventional camera calibration procedure. Here's a brief overview of our method:
Data Collection: First, we gather pairs of coordinates: 3D robot coordinates (X) and corresponding 2D image coordinates (x). The robot coordinates are predefined on a grid, and once the robot reaches each target position, the 2D coordinates are obtained either by manually marking the end-effector's position in the image or by using computer vision techniques. In our project, we collected data across a 7×7×3 grid within a cubic volume (see the code sketch after the next step).
Calibration Process: We treat the robot coordinates as world coordinates. Using OpenCV, we perform camera calibration to compute a 3×4 projection matrix P, which lets us compute the end-effector's 2D image coordinates as x = PX (in homogeneous coordinates, followed by division by the last component).
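For concreteness, here is a minimal sketch of both steps. The workspace bounds and the camera used to synthesize the 2D points are made-up placeholders (in practice the 2D points come from marking the end-effector in real images), and a plain-NumPy direct linear transform (DLT) stands in for OpenCV's calibration; it recovers the same kind of 3×4 matrix P with x = PX:

```python
import numpy as np

def make_grid(bounds, shape=(7, 7, 3)):
    """Predefined 3D robot target positions on a grid inside a cube.
    `bounds` = ((xmin, xmax), (ymin, ymax), (zmin, zmax)); placeholder values."""
    axes = [np.linspace(lo, hi, n) for (lo, hi), n in zip(bounds, shape)]
    return np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)

def estimate_P(X, x):
    """Direct linear transform: find the 3x4 matrix P with x ~ P X
    (homogeneous) from N >= 6 correspondences X (N,3) and x (N,2)."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        rows.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        rows.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    # Null-space solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """x = P X in homogeneous coordinates, then divide by the last component."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = Xh @ P.T
    return xh[:, :2] / xh[:, 2:3]

# Placeholder workspace bounds (metres) for the 7x7x3 calibration grid.
X = make_grid(((0.2, 0.5), (-0.15, 0.15), (0.05, 0.25)))   # (147, 3)

# Stand-in for the collected 2D points: synthesize them with a made-up camera.
P_true = np.array([[800., 0., 320., 100.],
                   [0., 800., 240., 50.],
                   [0., 0., 1., 0.5]])
x = project(P_true, X)

P = estimate_P(X, x)
print("max reprojection error:", np.abs(project(P, X) - x).max())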
This calibration method enables us to accurately visualize and map the robot's actions in two-dimensional space.
Best regards, Xiang
Hi @joeyfreakman,
I hope my previous comment answers all your questions, so I'll go ahead and close this issue for now. Please feel free to reopen it if you have any further questions.
Best,
Hi crossway_diffusion Team,
Thanks for your great work. I'm wondering how to visualize the action trajectory as your GIF shows, since it's quite hard to map the coordinates of the end-effector onto the images.