Hi, sorry for the late reply. If you are using motion capture data, e.g., BVH, then you can use the BVH offsets, and you no longer need inverse kinematics to calculate the rotations. You could have a look at this repo: https://github.com/sreyafrancis/PFNN. They process BVH files, and our feature extraction is actually inspired by their work.
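For reference, a minimal sketch of how the joint OFFSET vectors could be read out of a BVH hierarchy (a simplified illustration only, not the PFNN code; `read_bvh_offsets` is a hypothetical helper name, and the repo above has a full parser that also handles channels and rotations):

```python
def read_bvh_offsets(path):
    """Collect joint names and their OFFSET vectors from a BVH file.

    OFFSET lines inside 'End Site' blocks are skipped so that names
    and offsets stay aligned one-to-one.
    """
    names, offsets = [], []
    in_end_site = False
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0] == "End":            # start of an End Site block
                in_end_site = True
            elif in_end_site:
                if tokens[0] == "}":          # End Site block closed
                    in_end_site = False
            elif tokens[0] in ("ROOT", "JOINT"):
                names.append(tokens[1])
            elif tokens[0] == "OFFSET":
                offsets.append([float(v) for v in tokens[1:4]])
            elif tokens[0] == "MOTION":       # hierarchy section is over
                break
    return names, offsets
```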
Hi @EricGuo5513, thank you for the response.
Does this also work for .fbx and .c3d files from motion capture data? And do you have any suggestions on using the offsets (and what they are) with these files?
Additionally, I have markers (joints) and analogs from the .c3d files, in case those are also helpful for getting the new_joint_vecs.
Once again, thanks for the response, and great work with HumanML3D.
Looking forward to your reply. Happy New Year 🎉
Hi, actually I am not very familiar with .fbx and .c3d files. I know this work operates on .fbx files: https://github.com/DeepMotionEditing/deep-motion-editing
Although in different forms, these motion capture files typically contain offsets, rotations, and root positions. The HumanML3D data processing is meant for motions that DO NOT have preset offsets and joint rotations; as a consequence, HumanML3D cannot be converted back to motion capture files. Given that you already have the offsets in your motion capture files, you are better off using the offsets and rotations from those files. Then you can convert the new_joint_vecs back to mocap files directly. However, the whole kinematic calculation is too complicated to explain clearly in a few sentences, so I would encourage you to learn how to process these mocap files, especially forward kinematics (see the sketch below). There are some GitHub repositories that do this.
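To make the forward-kinematics step concrete, here is a minimal numpy sketch, assuming the local rotations have already been converted to per-joint rotation matrices (mocap files usually store Euler angles or quaternions, so a conversion step comes first; `forward_kinematics` and its argument layout are assumptions for illustration, not HumanML3D code):

```python
import numpy as np

def forward_kinematics(offsets, rotations, root_positions, parents):
    """Compute global joint positions from local mocap quantities.

    offsets:        (J, 3)       bone offsets from the mocap file
    rotations:      (T, J, 3, 3) per-frame local rotation matrices
    root_positions: (T, 3)       global root translation per frame
    parents:        length-J list; parents[j] is the parent index, -1 for the root

    Assumes joints are ordered so that every parent precedes its children.
    Returns global joint positions of shape (T, J, 3).
    """
    T, J = rotations.shape[:2]
    global_pos = np.zeros((T, J, 3))
    global_rot = np.zeros((T, J, 3, 3))
    for j in range(J):
        if parents[j] == -1:                 # root: use its translation directly
            global_pos[:, j] = root_positions
            global_rot[:, j] = rotations[:, j]
        else:
            p = parents[j]
            # Rotate the local bone offset by the parent's accumulated rotation,
            # then attach it to the parent's global position.
            global_pos[:, j] = global_pos[:, p] + np.einsum(
                "tij,j->ti", global_rot[:, p], offsets[j])
            global_rot[:, j] = global_rot[:, p] @ rotations[:, j]
    return global_pos
```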
Thank you again for guiding me in the right direction. I will work on your suggestions.
So far, I have isolated the data from the mocap files into a format similar to the output of the raw_pose_processing notebook, i.e., like the files in the joints/ folder. The shape is also the same: (frames, joints, coordinates).
I will post further progress here if I manage to get it into the HumanML3D format.
Successfully managed to convert the .c3d format to new_joints and new_joint_vecs.
Solution: extract the marker trajectories (markers['Points']) from the .c3d file, reshape them to (frames, joints, coordinates), and save them in the same layout as the raw_pose_processing.ipynb output.
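For anyone landing here later, a minimal sketch of that conversion, assuming the ezc3d reader (the exact library used may differ, and the marker-to-joint mapping/ordering is skeleton-specific and omitted here; motion.c3d is a placeholder file name):

```python
import numpy as np
import ezc3d  # assumed reader; the comment above only shows markers['Points']

# Load the .c3d file and pull out the marker trajectories.
c3d = ezc3d.c3d("motion.c3d")           # placeholder path
points = c3d["data"]["points"]          # shape (4, n_markers, n_frames); row 3 is homogeneous
joints = points[:3].transpose(2, 1, 0)  # -> (frames, joints, coordinates)

# Save in the same layout as the raw_pose_processing.ipynb output so that
# motion_representation.ipynb can pick it up from the joints/ folder.
np.save("joints/000000.npy", joints)
```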
The saved files are then ready to use for the next notebooks in the HumanML3D pipeline. Closing the issue. Cheers.
Hello,
I learned that the motion_representation.ipynb notebook produces the motion features new_joints and new_joint_vecs. However, it starts from the raw_pose_processing.ipynb output in the ./joints folder. I have custom motion capture data that is already in the shape of new_joints, i.e., (frames, joints, coordinates), and I would like to know how to convert these new_joints directly to new_joint_vecs.
@EricGuo5513 Please point me in the right direction.
Thank you.