LeCAR-Lab / human2humanoid

[IROS 2024] Learning Human-to-Humanoid Real-Time Whole-Body Teleoperation. [CoRL 2024] OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning
https://omni.human2humanoid.com/

Request for MDM data #7

Closed: Axian12138 closed this issue 4 days ago

Axian12138 commented 1 week ago

Hi, thanks for your great work! In your paper, Figure 4 shows the OmniH2O policy tracking motion goals from a language-based human motion generative model. I wonder if you could share the human motion files generated by MDM (ideally together with the retargeted motion data)? I also tried to track MDM-generated motion but found the results somewhat poor (especially in the real world). It would help me a lot! Thanks!

TairanHe commented 4 days ago

There is actually no motion data to share. We directly use the motion generated by MDM (https://github.com/GuyTevet/motion-diffusion-model) without any retargeting. Please let me know if you have more questions.
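For anyone trying to reproduce this, here is a minimal sketch (not the authors' code) of how one might pull 3-point sparse goals (head plus both wrists) directly out of an MDM sample, with no retargeting step. It assumes MDM's `generate.py` was run on the HumanML3D setting, which saves a `results.npy` containing a `'motion'` array of shape `(num_samples, 22, 3, num_frames)` of xyz joint positions on the 22-joint SMPL skeleton; the file path and downstream use are illustrative.

```python
import numpy as np

# SMPL 22-joint skeleton indices for the three sparse keypoints.
HEAD, L_WRIST, R_WRIST = 15, 20, 21

# MDM's generate.py saves its outputs as a pickled dict in results.npy
# (assumed HumanML3D setting; adjust the path to your own run).
data = np.load("results.npy", allow_pickle=True).item()
motion = data["motion"]              # (num_samples, 22, 3, num_frames)

sample = motion[0]                   # first generated sample: (22, 3, T)
sample = sample.transpose(2, 0, 1)   # -> (T, 22, 3), one frame per row

# Per-frame 3-point sparse goals, e.g. to feed an OmniH2O-style policy.
goals = sample[:, [HEAD, L_WRIST, R_WRIST], :]   # (T, 3, 3)
print(goals.shape)
```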

Axian12138 commented 3 days ago

Hi, thanks! But I still have two questions:

  1. What do you mean by 'without any retargeting'? Is that because you only need the 3-point sparse input, which can be obtained from human motion without any retargeting?
  2. If so, does that mean the 'OmniH2O-3 points' policy can only track motions without lower-body movement (e.g., it can't track 'kick a ball')?