@hynann Have you solved the problem yet? And how did you manage the global translation? Keeping the original translation makes sense for some animations but doesn't work for others.
@hynann @no-Seaweed I have synced this info to the first author of Motion-X++, Yuhong Zhang. He said he has updated the link, but it's not clear why the data on your side is bad. I suggest moving the issue to the Motion-X repo.
Thanks for your reply! After registering, I downloaded the Motion-X++ data from the link provided in the confirmation email. I've double-checked and confirmed that it should be the correct link. Did you encounter a similar issue when loading the Motion-X++ data for HumanTOMATO?
@hynann Our project has not been trained on Motion-X++. The dataset seems to have significant issues.
Did you mean it has not been trained on splits 'idea400', 'kungfu', 'music', 'perform'... ?
To my understanding, Motion-X and Motion-X++ both contain all of these subsets.
I see. Initially they didn't provide a download link for the original Motion-X, so I was confused. I just double-checked and found that it is available again. I think all my doubts have been addressed. Thank you very much!
Hi, thanks for the great work!
I’m working on converting Motion-X data to the HumanML3D (263-dim) format and ran into inconsistent root orientations in the first frames of the Motion-X data. After applying a canonicalization (the rigid rotation [[1, 0, 0], [0, 0, 1], [0, -1, 0]]) to the root translation and orientation, most motions face the +z direction in the first frame. However, there are some exceptions where the motion either faces the reverse direction or faces the ground.
https://github.com/user-attachments/assets/a46d13a1-1a7e-4ef3-8d47-ba4ff12a60f7
https://github.com/user-attachments/assets/5bf961ad-80c8-4a4d-952a-f0f4a4e19679
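For reference, here is a minimal sketch of that canonicalization step, assuming NumPy/SciPy, a y-up output convention, and that each sequence provides a root translation `trans` of shape (T, 3) and a global orientation `root_orient` of shape (T, 3) in axis-angle form; the function and variable names are just for illustration:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# The rigid rotation quoted above (a -90 degree rotation about the x-axis).
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])

def canonicalize(trans, root_orient):
    """Apply the rigid rotation M to a whole sequence (hypothetical helper)."""
    trans_c = trans @ M.T                      # rotate root translation, (T, 3)
    rot = R.from_rotvec(root_orient)           # per-frame global orientations
    rot_c = R.from_matrix(M) * rot             # left-compose M in world space
    return trans_c, rot_c.as_rotvec()
```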
One straightforward approach would be to force all motions to face the +z direction in the first frame, but this might lose the semantic meaning of specific movements, such as swimming or crawling.
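For concreteness, a rough sketch of that forced-facing approach, assuming y-up data after the canonicalization above and that the body's forward axis is +z in the canonical frame (names are again illustrative); note that extracting the heading becomes degenerate for exactly the lying-down cases like swimming or crawling, where the forward axis has almost no horizontal component:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def face_z_plus(trans, root_orient):
    """Rotate a whole sequence about the vertical (y) axis so that the
    first frame faces +z. Hypothetical helper; trans and root_orient are
    (T, 3) arrays, with root_orient in axis-angle form."""
    rot0 = R.from_rotvec(root_orient[0])
    forward = rot0.apply([0.0, 0.0, 1.0])      # body forward in world space
    yaw = np.arctan2(forward[0], forward[2])   # heading relative to +z
    fix = R.from_euler('y', -yaw)              # undo the heading
    trans_f = fix.apply(trans)
    orient_f = (fix * R.from_rotvec(root_orient)).as_rotvec()
    return trans_f, orient_f
```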
I’m curious how you handled this issue when training HumanTOMATO. Did you make all motions face the same direction, or did you use the camera parameters to account for the actor's direction relative to the camera frame?