FORTH-ModelBasedTracker / MocapNET

We present MocapNET, a real-time method that estimates the 3D human pose directly in the popular Biovision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) a novel and compact 2D pose NSRM representation; (b) a human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose, while also allowing the body to be decomposed into an upper and a lower kinematic hierarchy, which permits recovering the human pose even under significant occlusions; (c) an efficient inverse kinematics solver that refines the neural-network-based solution, providing 3D human pose estimations consistent with the limb sizes of a target person (if known). Together these yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset over the baseline method (the original MocapNET) while maintaining real-time performance.
https://www.youtube.com/watch?v=Jgz1MRq-I-k

I have another question. Can MocapNET convert JSON files generated by OpenPose into BVH files? If so, how can I upload my own OpenPose JSON files to Google Colab and convert them into BVH files? Need help. #124

Open next1foreal opened 2 months ago

next1foreal commented 2 months ago

I have another question. Can MocapNET convert JSON files generated by OpenPose into BVH files? If so, how can I upload my own OpenPose JSON files to Google Colab and convert them into BVH files? Need help.
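For the upload half of the question, a minimal sketch of what this could look like in a Colab cell is below. The `google.colab.files` upload call is standard Colab; everything after it assumes MocapNET has already been built inside the session (as the project's existing Colab notebook does), and the converter flags in the trailing comments are recollections of the README, so verify them against the repository docs before use.

```python
# Run inside a Colab cell after building MocapNET in the session.
# Upload a zip archive containing the per-frame OpenPose JSON files
# (e.g. myvideo_000000000000_keypoints.json, ...).
from google.colab import files
import os
import zipfile

uploaded = files.upload()            # opens a browser file picker
archive = next(iter(uploaded))       # filename of the uploaded zip

os.makedirs('myJSON', exist_ok=True)
with zipfile.ZipFile(archive) as z:
    z.extractall('myJSON')           # JSON files now live under myJSON/

# Then hand the folder to MocapNET's own converter and CSV front-end.
# Binary names are from the repository; the exact flags below are
# assumptions -- check the README before running:
# !./convertOpenPoseJSONToCSV --from myJSON/ -o myJSON/
# !./MocapNET2CSV --from myJSON/2dJoints_v1.4.csv
```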

Paritosh97 commented 1 month ago

In the same boat. The problem is converting the JSON files to CSVs, which is not trivial because you have to compile the whole project just to run the converter. Hopefully someone comes up with a standalone Python script for the converter so everything can be streamlined in Colab...
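Until such a script ships with the project, here is a rough standalone sketch of the JSON-to-CSV step. The OpenPose side is standard: each per-frame JSON file carries `people[i]["pose_keypoints_2d"]` as flat x, y, confidence triplets in BODY_25 order. The CSV header (`2DX_`/`2DY_`/`visible_` columns) and the normalization by image size are assumptions modeled on MocapNET's 2D CSV format; compare the output against a CSV produced by the compiled converter before feeding it to `MocapNET2CSV`.

```python
#!/usr/bin/env python3
"""Rough OpenPose-JSON -> MocapNET-style 2D CSV sketch.

Assumptions (verify against MocapNET's bundled converter):
  * OpenPose ran with the BODY_25 model.
  * MocapNET wants one row per frame with normalized x, y and a
    visibility flag per joint, under 2DX_/2DY_/visible_ headers.
"""
import csv
import glob
import json

# BODY_25 joint names, in OpenPose's output order.
BODY25 = ["Nose", "Neck", "RShoulder", "RElbow", "RWrist",
          "LShoulder", "LElbow", "LWrist", "MidHip",
          "RHip", "RKnee", "RAnkle", "LHip", "LKnee", "LAnkle",
          "REye", "LEye", "REar", "LEar",
          "LBigToe", "LSmallToe", "LHeel",
          "RBigToe", "RSmallToe", "RHeel"]

WIDTH, HEIGHT = 1920, 1080   # resolution of the source video


def convert(json_dir, out_csv):
    header = []
    for name in BODY25:
        header += [f"2DX_{name}", f"2DY_{name}", f"visible_{name}"]
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        # OpenPose writes one *_keypoints.json file per video frame.
        for path in sorted(glob.glob(f"{json_dir}/*_keypoints.json")):
            with open(path) as jf:
                data = json.load(jf)
            if not data["people"]:            # no person detected this frame
                writer.writerow([0.0] * len(header))
                continue
            kps = data["people"][0]["pose_keypoints_2d"]
            row = []
            for i in range(len(BODY25)):
                x, y, conf = kps[3 * i: 3 * i + 3]
                row += [x / WIDTH, y / HEIGHT, 1.0 if conf > 0.0 else 0.0]
            writer.writerow(row)


if __name__ == "__main__":
    convert("myJSON", "2dJoints.csv")
```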