Open sunbin1357 opened 3 years ago
I don't have plans to clean up and share the code any time soon. I can, however, guide you to implement it yourself. The inputs/outputs are the same. It's just a matter of doing the skeleton mapping I describe in the paper and building the code to feed the pose estimates to our method. Shoot me an email, and I can help with the script.
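As a rough illustration of that skeleton-mapping step, here is a minimal sketch. All joint names below are hypothetical placeholders; substitute the names your pose estimator emits and the names in your target rig (see the paper for the mapping we actually use):

```python
# Hypothetical mapping from source-skeleton joint names to target-rig joint
# names. These names are placeholders, not the ones used in the paper/repo.
SOURCE_TO_TARGET = {
    "hips": "Hips",
    "spine": "Spine",
    "left_shoulder": "LeftShoulder",
    "left_elbow": "LeftForeArm",
}

def map_pose(source_pose):
    """Relabel a {joint_name: position} dict from source to target names,
    dropping joints that have no counterpart in the target skeleton."""
    return {
        SOURCE_TO_TARGET[joint]: position
        for joint, position in source_pose.items()
        if joint in SOURCE_TO_TARGET
    }

# Example: a joint absent from the mapping is simply dropped.
mapped = map_pose({"hips": (0.0, 0.9, 0.0), "left_toe": (0.1, 0.0, 0.2)})
```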
In the paper, you tried motion retargeting from human video to a Mixamo 3D character. If I want to retarget motion from human video to a humanoid robot (note that I only have the robot's topology, not any motion data), can I use your method to do this directly?
When retargeting motion from video, we first get an estimate of the pose sequence using a 3D pose estimation method. From that, we use our method to generate joint rotations for the target character in T-pose (rest pose). If you have the robot configuration in rest pose, you could use our method to get the joint rotations from rest pose to imitate the input human motion. Then you would use the robot controller to track the retargeted motion. Let me know if this doesn't make sense.
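To make the "joint rotations from rest pose" idea concrete, here is a toy sketch, simplified to 2D (one angle per bone) rather than the full 3D rotations our method produces; the function names are illustrative, not this repository's API:

```python
import math

def bone_rotation_2d(rest_dir, observed_dir):
    """Angle (radians) rotating rest_dir onto observed_dir in the plane.
    This stands in for the per-joint rotation the method produces in 3D."""
    angle = math.atan2(observed_dir[1], observed_dir[0]) - math.atan2(rest_dir[1], rest_dir[0])
    # Wrap into (-pi, pi] so the controller tracks the shortest rotation.
    return math.atan2(math.sin(angle), math.cos(angle))

def retarget_sequence(rest_pose, pose_sequence):
    """Per-frame joint rotations from rest pose, one entry per bone.
    A robot controller would then be asked to track these targets."""
    return [
        {bone: bone_rotation_2d(rest_pose[bone], frame[bone]) for bone in rest_pose}
        for frame in pose_sequence
    ]

# Toy example: in rest pose the arm points along +x; in the estimated
# video frame it points along +y, so the joint must rotate by pi/2.
rest = {"upper_arm": (1.0, 0.0)}
frames = [{"upper_arm": (0.0, 1.0)}]
rotations = retarget_sequence(rest, frames)
```

The key point the sketch captures: the output is a rotation *relative to the rest pose* per joint per frame, which is exactly what a position-controlled robot can consume as joint targets.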
Will you release the code for the demo whose input is a 3D human pose?