FORTH-ModelBasedTracker / MocapNET

We present MocapNET, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) a novel and compact 2D pose NSRM representation; (b) a human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose, also allowing the body to be decomposed into an upper and lower kinematic hierarchy, which permits recovery of the human pose even under significant occlusions; and (c) an efficient Inverse Kinematics solver that refines the neural-network-based solution, providing 3D human pose estimations consistent with the limb sizes of a target person (if known). Together these yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset compared to the baseline method (MocapNET) while maintaining real-time performance.
https://www.youtube.com/watch?v=Jgz1MRq-I-k

How to rig VRM / Live2D / normal video with the BVH outputs #86

Closed. yslion closed this issue 1 year ago.

yslion commented 2 years ago

How can I rig a VRM / Live2D / normal video character with the BVH outputs?

AmmarkoV commented 2 years ago

Hello, and sorry for the very long delay in responding to your message! I use MakeHuman, plus this Rig, plus the BVH Retargetter plugin for Blender, to create an "animation" from the BVH outputs and a rigged, skinned human model!
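Before retargeting in Blender, it can be useful to sanity-check the BVH file itself, e.g. to see which joints it declares so you can map them onto your rig. A minimal sketch (the sample skeleton and joint names below are illustrative, not MocapNET's actual output):

```python
def parse_bvh_joints(bvh_text):
    """Return the joint names declared in a BVH HIERARCHY section, in order."""
    joints = []
    for line in bvh_text.splitlines():
        tokens = line.strip().split()
        # ROOT introduces the root joint; JOINT introduces each child joint.
        if tokens and tokens[0] in ("ROOT", "JOINT"):
            joints.append(tokens[1])
    return joints


# Illustrative two-joint BVH file, not a real MocapNET export.
sample = """HIERARCHY
ROOT Hips
{
  OFFSET 0 0 0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0 10 0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0 5 0
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.04
0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 5 0 0
"""

print(parse_bvh_joints(sample))  # -> ['Hips', 'Spine']
```

Inside Blender, the file can then be imported with the built-in importer (File > Import > Motion Capture (.bvh)) before running the retargeting addon.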

AmmarkoV commented 1 year ago

I have added a video that explains how the MocapNET output can be rigged and integrated in Blender: YouTube Link