video2bvh extracts human motion from video and saves it as a BVH mocap file.
video2bvh consists of 3 modules: pose_estimator_2d, pose_estimator_3d and bvh_skeleton.
The pose_estimator_2d module uses OpenPose through its Python API, so make sure the BUILD_PYTHON flag is enabled while building OpenPose.
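If you are unsure whether the Python bindings were built, a quick check like the sketch below can help. The build path is a placeholder and the import pattern follows OpenPose's standard Python examples; adjust both to your own setup.

```python
# Minimal sketch: verify the OpenPose Python bindings are importable.
# The path below is a placeholder -- point it at your own OpenPose build.
import sys

sys.path.append('/path/to/openpose/build/python')  # hypothetical build location

try:
    from openpose import pyopenpose as op  # fails if BUILD_PYTHON was not enabled
    print('OpenPose Python API loaded:', op.__name__)
except ImportError as err:
    print('OpenPose Python bindings not found; rebuild OpenPose with BUILD_PYTHON enabled:', err)
```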
The original models provided by 3d-pose-baseline and VideoPose3D use the Human3.6M 17-joint skeleton as their input format (see bvh_skeleton/h36m_skeleton.py), but OpenPose's detection results use a 25-joint format (see OpenPose's output.md). So, we trained these models from scratch using 2D poses estimated by OpenPose on the Human3.6M dataset.
The training procedure is almost the same as in the original implementations. We use subjects S1, S5, S6, S7, and S8 as the training set, and S9 and S11 as the test set. For 3d-pose-baseline, the best MPJPE is 64.12 mm (Protocol #1), and for VideoPose3D the best MPJPE is 58.58 mm (Protocol #1). The pre-trained models can be downloaded from the following links.
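For reference, MPJPE (Protocol #1) is the mean Euclidean distance between predicted and ground-truth joint positions after root alignment. The following is a minimal NumPy sketch of the metric, not the evaluation code used to produce the numbers above:

```python
import numpy as np

def mpjpe(predicted, target):
    """Mean Per Joint Position Error (Protocol #1).

    predicted, target: arrays of shape (n_frames, n_joints, 3) in millimetres,
    assumed to be already aligned at the root joint.
    """
    assert predicted.shape == target.shape
    # Per-joint Euclidean distance, averaged over all joints and frames.
    return np.mean(np.linalg.norm(predicted - target, axis=-1))

# Toy example with 10 frames of a 17-joint skeleton.
gt = np.random.randn(10, 17, 3) * 100
pred = gt + np.random.randn(10, 17, 3) * 10
print(f'MPJPE: {mpjpe(pred, gt):.2f} mm')
```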
After you download the models folder, place or link it under the root directory of this project.
Open demo.ipynb in Jupyter Notebook and follow the instructions. As you will see in demo.ipynb, video2bvh converts a video to a BVH file in 3 main steps.
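A condensed outline of those steps is sketched below. The class and method names mirror the pattern used in demo.ipynb but should be treated as illustrative rather than authoritative, all file paths are placeholders, and the notebook itself performs extra processing that is omitted here.

```python
# Illustrative outline of the 3 steps; names and paths are assumptions --
# see demo.ipynb for the exact API.
import cv2
from pose_estimator_2d import openpose_estimator  # step 1: 2D pose estimation
from pose_estimator_3d import estimator_3d        # step 2: 2D-to-3D pose lifting
from bvh_skeleton import h36m_skeleton            # step 3: BVH conversion

# Step 1: estimate 2D keypoints for every frame with OpenPose.
e2d = openpose_estimator.OpenPoseEstimator(model_folder='/path/to/openpose/models/')
cap = cv2.VideoCapture('/path/to/input_video.mp4')
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
keypoints_2d = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    keypoints_2d.extend(e2d.estimate(img_list=[frame]))
cap.release()

# Step 2: lift the 2D keypoint sequence to 3D poses with a pre-trained model.
e3d = estimator_3d.Estimator3D(
    config_file='models/openpose_video_pose_243f/video_pose.yaml',     # placeholder
    checkpoint_file='models/openpose_video_pose_243f/best_58.58.pth',  # placeholder
)
poses_3d = e3d.estimate(keypoints_2d, image_width=width, image_height=height)

# Step 3: convert the 3D pose sequence to a BVH file.
h36m_skeleton.H36mSkeleton().poses2bvh(poses_3d, output_file='output.bvh')
```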
Once you get the BVH file, you can easily retarget the motion to other 3D character models with existing tools. The girl model we used was created with MakeHuman, and the demo was rendered with Blender. The MakeWalk plugin helped us do the retargeting work.