IDEA-Research / HumanTOMATO

[ICML 2024] 🍅HumanTOMATO: Text-aligned Whole-body Motion Generation
https://lhchen.top/HumanTOMATO

body-only feature representation #15

Open Lin-Kayla opened 6 months ago

Lin-Kayla commented 6 months ago

Hi! For the body-only part, what's the difference between the TOMATO representation and HumanML3D? From the supplementary material, I understand the difference is a rotation regularization, but I can't find it in your code. Also, when will the pretrained TMR model be released?

LinghaoChan commented 6 months ago

> Hi! For the body-only part, what's the difference between the TOMATO representation and HumanML3D? From the supplementary material, I understand the difference is a rotation regularization, but I can't find it in your code. Also, when will the pretrained TMR model be released?

Yes, you are right. We plan to release it this week!

Lin-Kayla commented 6 months ago

Thank you for your response. I'm excited to see the release of OpenTMA! But I still can't find the rotation regularization. Could you tell me where it is?

LinghaoChan commented 6 months ago

> Thank you for your response. I'm excited to see the release of OpenTMA! But I still can't find the rotation regularization. Could you tell me where it is?

I am not in front of my computer right now. As I recall, the TOMATO representation removes the rotation part, doesn't it?

Lin-Kayla commented 6 months ago

By the rotation part, do you mean the joint rotation representation (rot_data) at line 308 of motion_representation.py? Isn't it still included in the final representation?

```python
# line 308
rot_data = cont_6d_params[:, 1:].reshape(len(cont_6d_params), -1)
# line 323
data = np.concatenate([data, rot_data[:-1]], axis=-1)
```

shunlinlu commented 6 months ago

> By the rotation part, do you mean the joint rotation representation (rot_data) at line 308 of motion_representation.py? Isn't it still included in the final representation?
>
> ```python
> # line 308
> rot_data = cont_6d_params[:, 1:].reshape(len(cont_6d_params), -1)
> # line 323
> data = np.concatenate([data, rot_data[:-1]], axis=-1)
> ```

After preprocessing with motion_representation.py, you get the full representation, which still includes the rotations. As stated in the paper, we then discard the rotation part, leaving a vector dimension of 313. You can further process the representation with:

```python
# keep root data + joint positions and joint velocities; drop the 6D rotation block
motion = np.concatenate(
    (motion[..., :4 + (njoints - 1) * 3],
     motion[..., 4 + (njoints - 1) * 9 : 4 + (njoints - 1) * 9 + njoints * 3]),
    axis=-1,
)  # njoints = 52

Then, you can use the checkpoints we provide.
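For anyone checking the arithmetic, here is a small standalone sketch (not part of the repository) verifying that the two kept slices add up to the 313 dimensions mentioned above for njoints = 52; the full preprocessed width of 623 assumes the usual HumanML3D-style layout (root data, joint positions, 6D rotations, joint velocities, foot contacts):

```python
import numpy as np

njoints = 52  # body-only joint count used above

# widths of the two slices kept by the concatenation above
root_and_positions = 4 + (njoints - 1) * 3       # root data + joint positions -> 157
velocities_start = 4 + (njoints - 1) * 9         # skip past the (njoints - 1) * 6 rotation block
velocities = njoints * 3                         # joint velocities -> 156

print(root_and_positions + velocities)           # 313

# assumed full width after motion_representation.py (HumanML3D-style layout)
full_width = 4 + (njoints - 1) * 3 + (njoints - 1) * 6 + njoints * 3 + 4  # 623
motion = np.zeros((1, full_width))
motion = np.concatenate(
    (motion[..., :root_and_positions],
     motion[..., velocities_start:velocities_start + velocities]),
    axis=-1,
)
print(motion.shape)                              # (1, 313)
```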