EricGuo5513 / HumanML3D

HumanML3D: A large and diverse 3d human motion-language dataset.
MIT License

new_joints to new_joint_vecs from custom MoCap data #115

Closed: zybermonk closed this issue 10 months ago

zybermonk commented 10 months ago

Hello,

I learnt that motion_representation.ipynb converts motions into the two feature formats, new_joints and new_joint_vecs. However, it takes as input the output of raw_pose_processing.ipynb in the ./joints folder.

I have custom motion capture data that is already in the 'new_joints' shape, i.e. (frames, joints, coordinates). I would like to know how to convert these 'new_joints' directly to 'new_joint_vecs'.
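For concreteness, this is the array layout I mean (the 120-frame clip, the file name, and the save step are illustrative; the 22-joint count follows HumanML3D's SMPL subset):

```python
import numpy as np

# Hypothetical custom mocap clip: 120 frames, 22 SMPL joints, xyz coordinates,
# i.e. the same layout as the arrays in HumanML3D's new_joints folder.
frames, num_joints = 120, 22
new_joints = np.zeros((frames, num_joints, 3), dtype=np.float32)

assert new_joints.shape == (frames, num_joints, 3)
# np.save("joints/custom_clip.npy", new_joints)  # drop into the pipeline's input folder
```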

@EricGuo5513 Please point me in the right direction.

Thank you.

EricGuo5513 commented 10 months ago

Hi, sorry for the late reply. If you are using motion capture data, e.g., BVH, then you can use the BVH offsets, and you no longer need inverse kinematics to calculate the rotations. You could have a look at this repo: https://github.com/sreyafrancis/PFNN. They process BVH files, and our feature extraction is actually inspired by their work.
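To illustrate what "use the bvh offset" refers to: a BVH file stores a fixed per-joint OFFSET in its HIERARCHY section. The fragment and joint names below are made up for illustration; a real parser (such as the one in the PFNN repo) would also read the CHANNELS rotations.

```python
# Minimal sketch of pulling per-joint OFFSET values out of a BVH hierarchy.
# The two-joint fragment below is invented for illustration only.
bvh_fragment = """\
HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 10.5 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
    }
}
"""

def read_offsets(text):
    """Return {joint_name: (x, y, z)} from a BVH hierarchy string."""
    offsets, current = {}, None
    for line in text.splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] in ("ROOT", "JOINT"):
            current = tokens[1]
        elif tokens[0] == "OFFSET" and current is not None:
            offsets[current] = tuple(float(v) for v in tokens[1:4])
    return offsets

print(read_offsets(bvh_fragment))
# {'Hips': (0.0, 0.0, 0.0), 'Spine': (0.0, 10.5, 0.0)}
```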

zybermonk commented 10 months ago

Hi @EricGuo5513 Thank you for the response.

Does this also work for .fbx and .c3d motion capture files? And do you have any suggestions on using the offsets (and what they are) with these files?

Additionally, I have markers (joints) and analogs from .c3d files, in case those are also helpful to get the new_joint_vecs.

Once again, thanks for the response, and great work with HumanML3D.

Looking forward to your reply, Happy new year 🎉

EricGuo5513 commented 10 months ago

Hi, actually I am not very familiar with .fbx and .c3d files. I know this project works with .fbx files: https://github.com/DeepMotionEditing/deep-motion-editing

Although in different forms, these motion capture files typically contain offsets, rotations, and root positions. The HumanML3D data processing is intended for motions that do NOT have preset offsets and joint rotations; a consequence is that HumanML3D cannot be converted back to motion capture files. Since you already have offsets in your motion capture files, you are better off using the offsets and rotations in those files. Then you can convert the new_joint_vecs back to mocap files directly. However, the whole kinematic calculation is too involved to explain clearly in a few sentences. I would encourage you to learn how to process these mocap files, especially forward kinematics. There are some GitHub repositories doing this.
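The forward kinematics mentioned above can be sketched in a few lines: each joint's global position is its parent's position plus the parent's accumulated rotation applied to the joint's offset. The 3-joint chain, joint names, and 90-degree pose below are all toy values for illustration:

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z-axis."""
    r = np.deg2rad(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(offsets, local_rots, parents, root_pos):
    """Global joint positions: p[j] = p[parent] + R_global[parent] @ offset[j],
    with R_global accumulated down the chain."""
    n = len(offsets)
    global_rots = [None] * n
    positions = np.zeros((n, 3))
    for j in range(n):
        if parents[j] == -1:          # root joint
            global_rots[j] = local_rots[j]
            positions[j] = root_pos
        else:
            p = parents[j]
            positions[j] = positions[p] + global_rots[p] @ offsets[j]
            global_rots[j] = global_rots[p] @ local_rots[j]
    return positions

# Toy 3-joint chain (root -> spine -> head), each bone 1 unit along y,
# with the whole chain bent 90 degrees at the root.
offsets = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
parents = [-1, 0, 1]
local_rots = [rot_z(90.0), np.eye(3), np.eye(3)]
pos = forward_kinematics(offsets, local_rots, parents, root_pos=np.zeros(3))
# pos[2] ends up at roughly (-2, 0, 0): both bones rotated into the -x direction.
```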

zybermonk commented 10 months ago

Thank you again for guiding me in the right direction. I will work on your suggestions.

So far I have isolated the data from the mocap files in a format similar to the output of the raw_pose_processing notebook, i.e. like the files in the ./joints folder. The shape is also the same: (frames, joints, coordinates).

I will update further progress here if I manage to get it similar to HumanML3D format.

zybermonk commented 10 months ago

Successfully managed to convert the .c3d format to new_joints and new_joint_vecs.

Solution:

  1. Use a biomechanics library to process the .c3d files (I used Kinetics Toolkit).
  2. Read the markers and analogs from the .c3d file, analyze the joints, and visualize the motion if required.
  3. Select the first 22 SMPL joints to stay in sync with HumanML3D, and rename the joints for uniformity (learn the SMPL skeleton structure and map it to the .c3d joint data found in markers['Points']).
  4. The output will be similar to the ./joints folder produced by raw_pose_processing.ipynb, ready for the subsequent notebooks in the HumanML3D pipeline.
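The steps above might look roughly like this. Everything here is a hedged sketch: the marker names, the two-entry SMPL ordering, and the random data are placeholders, and with Kinetics Toolkit the points dictionary would come from reading the .c3d file rather than being built by hand.

```python
import numpy as np

# Placeholder for the per-marker data a c3d reader gives you: one
# (frames, 4) homogeneous-coordinate array per named marker.
frames = 60
fake_points = {
    "PELVIS": np.random.rand(frames, 4),
    "L_HIP": np.random.rand(frames, 4),
    # ... remaining markers for the 22-joint SMPL subset ...
}

# Illustrative (not authoritative) marker -> SMPL-order list. The real
# mapping depends on your capture's marker set; HumanML3D uses the first
# 22 SMPL joints, starting with the pelvis.
smpl_order = ["PELVIS", "L_HIP"]  # would list all 22 names in practice

def markers_to_new_joints(points, order):
    """Stack named markers into a (frames, joints, 3) array, dropping the
    homogeneous coordinate, i.e. the layout of HumanML3D's ./joints files."""
    return np.stack([points[name][:, :3] for name in order], axis=1)

new_joints = markers_to_new_joints(fake_points, smpl_order)
assert new_joints.shape == (frames, len(smpl_order), 3)
# np.save("joints/custom_clip.npy", new_joints)  # then run the remaining notebooks
```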

Closing the issue. Cheers.