dexsuite / dex-retargeting

https://yzqin.github.io/anyteleop/
MIT License
164 stars 19 forks

How To do position retarget like AnyTeleop #14

Open xbkaishui opened 3 months ago

xbkaishui commented 3 months ago

Hi, it's great work. I saw the hand detection retargeting logic; how about wrist pose retargeting using an RGB-D camera, as mentioned in the AnyTeleop paper? Can you give some code examples?

How do I convert the wrist frame to the camera frame?

Thanks

yzqin commented 3 months ago

Hi @xbkaishui

I'm not quite sure I understand your question. Could you clarify what you mean by "wrist pose retarget" in the context of AnyTeleop?

Also, I'm not sure why I would need to convert the wrist frame to the camera frame for teleoperation. Could you provide some more details or context about your question?

xbkaishui commented 3 months ago

Hi @yzqin

Thanks for your quick response. I want to use the wrist pose detection result to teleoperate the robot arm.

Below is the relevant passage from the original paper on Wrist Pose Detection from RGB-D:

We use the pixel positions of the detected keypoints to retrieve the corresponding depth values from the depth image. Then, utilizing known intrinsic camera parameters, we compute the 3D positions of the keypoints in the camera frame. The alignment of the RGB and depth images is handled by the camera driver. With the 3D keypoint positions in both the local wrist frame and global camera frame, we can estimate the wrist pose using the Perspective-n-Point (PnP) algorithm.

Here, the 3D camera captures depth information. The wrist position can be obtained from keypoints within the wrist coordinate system, and the wrist position can also be observed in the camera coordinate system. Should we create a transformation matrix between the wrist coordinate system and the camera coordinate system here? The wrist pose from the camera's perspective should be this transformation matrix, right?
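
To make my question concrete, here is my rough sketch of the quoted pipeline (just my own understanding, not your code). I'm assuming hypothetical inputs: `keypoints_px` (N x 2 detected pixel coordinates), `depth` (an aligned depth image in meters), `K` (the 3x3 intrinsic matrix), and `keypoints_wrist` (the same N keypoints expressed in the local wrist frame):

```python
import numpy as np
import cv2


def backproject_keypoints(keypoints_px, depth, K):
    """Lift 2D pixel keypoints to 3D points in the camera frame using the
    aligned depth image and the camera intrinsics (first half of the quote)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    points_camera = []
    for u, v in keypoints_px:
        z = depth[int(round(v)), int(round(u))]   # depth at the keypoint pixel
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points_camera.append([x, y, z])
    return np.asarray(points_camera)


def estimate_wrist_pose(keypoints_wrist, keypoints_px, K):
    """Estimate the wrist-to-camera transform with PnP (second half of the
    quote): 3D keypoints in the local wrist frame vs. their detected pixels."""
    ok, rvec, tvec = cv2.solvePnP(
        keypoints_wrist.astype(np.float64),
        keypoints_px.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)
    T_camera_wrist = np.eye(4)        # homogeneous wrist pose in the camera frame
    T_camera_wrist[:3, :3] = R
    T_camera_wrist[:3, 3] = tvec.ravel()
    return T_camera_wrist
```

If this is roughly right, then the 4x4 matrix returned by `estimate_wrist_pose` would be exactly the wrist-to-camera transformation I was asking about, and `backproject_keypoints` would give the camera-frame 3D keypoints the quote mentions.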

yzqin commented 3 months ago

Hi @xbkaishui

Apologies for the delayed response; I was traveling.

To clarify, we use the FrankMocap model for wrist pose estimation in this project. However, due to licensing restrictions, we cannot directly release that part of the code within AnyTeleop, but you can download FrankMocap yourself for free.

If you're interested in exploring wrist pose detection from RGB-D data, you can find our previous implementation here:

Code: https://github.com/yzqin/dex-hand-teleop/blob/3f7b56deed878052ec733a32b503aceee4ca8c8c/hand_detector/hand_monitor.py#L102

Let me know if you have any other questions!

xbkaishui commented 2 months ago

Hi Yuzhe

I still don't fully understand the entire data collection process. I'm not clear on how to control the robotic arm using my hand. How are the hand coordinates captured by the 3D camera mapped to the robotic arm? Can you explain more?

Thanks

yzqin commented 2 months ago

Hi @xbkaishui

Arm motion control is quite a bit more complicated than hand retargeting. I am working on some paper submission deadlines right now, and I will try to wrap up better documentation about the arm later. It requires many more dependencies on the software side and more effort on the tutorial.
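
At a high level, the common idea is to calibrate a fixed transform from the camera frame to the robot base frame, express the detected wrist pose in the base frame, and then track the operator's relative wrist motion with the arm's inverse kinematics. Below is a heavily simplified sketch of that general idea only; it is not our actual implementation (which is more involved, as mentioned above), and every name in it (`T_base_camera`, `solve_ik`, etc.) is a placeholder:

```python
import numpy as np


class WristToArmMapper:
    """Maps a wrist pose detected in the camera frame to an end-effector
    target in the robot base frame, tracking motion relative to the start."""

    def __init__(self, T_base_camera: np.ndarray):
        self.T_base_camera = T_base_camera  # extrinsic calibration (placeholder)
        self.T_init_wrist = None            # wrist pose when teleoperation starts
        self.T_init_ee = None               # end-effector pose when teleoperation starts

    def ee_target(self, T_camera_wrist: np.ndarray, T_current_ee: np.ndarray) -> np.ndarray:
        T_base_wrist = self.T_base_camera @ T_camera_wrist
        if self.T_init_wrist is None:       # latch initial poses on the first frame
            self.T_init_wrist = T_base_wrist
            self.T_init_ee = T_current_ee
        # Apply the wrist motion relative to its initial pose onto the initial
        # end-effector pose, so the operator drives the arm incrementally.
        T_delta = T_base_wrist @ np.linalg.inv(self.T_init_wrist)
        return T_delta @ self.T_init_ee


# The 4x4 target pose would then go to your arm's IK / motion controller, e.g.
#   q_target = solve_ik(mapper.ee_target(T_camera_wrist, T_current_ee))
# where solve_ik is a placeholder for whatever IK interface your robot provides.
```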

xbkaishui commented 2 months ago

ok, got it