This project integrates several open-source projects, including VideoPose3D, video-to-pose3D, video2bvh, AlphaPose, Higher-HRNet-Human-Pose-Estimation, and OpenPose; many thanks to all of the projects mentioned above.
The pipeline first extracts 2D joint keypoints from the video using a 2D pose estimator such as AlphaPose or HRNet, then lifts the 2D keypoints to 3D joint positions using VideoPose3D, and finally converts the 3D joint positions into a BVH motion file.
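The three stages above can be sketched as follows. This is a minimal toy illustration of the data shapes involved, not the project's actual code: the naive lifting (appending a zero depth channel) stands in for the learned VideoPose3D network.

```python
import numpy as np

def lift_2d_to_3d(keypoints_2d):
    """Toy stand-in for VideoPose3D's lifting step.

    Takes 2D keypoints of shape (frames, joints, 2) and returns
    (frames, joints, 3) by appending a zero depth channel; the real
    model predicts depth with a temporal convolutional network.
    """
    frames, joints, _ = keypoints_2d.shape
    depth = np.zeros((frames, joints, 1))
    return np.concatenate([keypoints_2d, depth], axis=-1)

# Example: 100 frames of the 17-joint COCO skeleton, as produced by
# a 2D detector such as AlphaPose or HRNet.
keypoints_2d = np.random.rand(100, 17, 2)
keypoints_3d = lift_2d_to_3d(keypoints_2d)
print(keypoints_3d.shape)  # (100, 17, 3)
```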
For environment setup, you can refer to the dependencies of the video-to-pose3D project. This project uses those dependencies, with some modifications by me to fix a few bugs.
./joints_detectors/Alphapose/models/sppe
./joints_detectors/Alphapose/models/yolo
./joints_detectors/hrnet/models/pytorch/pose_coco/
./joints_detectors/hrnet/lib/detector/yolo
./checkpoint
./pose_trackers/lighttrack
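Before running, you can verify that the pretrained models have been placed in the folders listed above. This small helper is hypothetical (not part of the project); the paths are taken directly from the list in this README:

```python
import os

# Folders that must contain the downloaded pretrained models/checkpoints
# (paths copied from the list above).
MODEL_DIRS = [
    "./joints_detectors/Alphapose/models/sppe",
    "./joints_detectors/Alphapose/models/yolo",
    "./joints_detectors/hrnet/models/pytorch/pose_coco/",
    "./joints_detectors/hrnet/lib/detector/yolo",
    "./checkpoint",
    "./pose_trackers/lighttrack",
]

def missing_model_dirs(dirs):
    """Return the subset of dirs that do not exist on disk."""
    return [d for d in dirs if not os.path.isdir(d)]

if __name__ == "__main__":
    for d in missing_model_dirs(MODEL_DIRS):
        print(f"missing: {d}")
```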
Place your video in the ./outputs/inputvideo folder, then set the video path in videopose.py, like this:
inference_video('outputs/inputvideo/kunkun_cut.mp4', 'alpha_pose')
After waiting a few minutes, you will find the output video in the ./outputs/outputvideo directory, and the BVH file in the ./outputs/outputvideo/alpha_pose_kunkun_cut/bvh directory.
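As a quick sanity check on the result, a valid BVH file begins with a HIERARCHY section and contains a MOTION section declaring the frame count. A small, hypothetical validator (not part of the project):

```python
def looks_like_bvh(text):
    """Cheap structural check: BVH files begin with a HIERARCHY block
    and contain a MOTION block declaring the frame count."""
    return (text.lstrip().startswith("HIERARCHY")
            and "MOTION" in text
            and "Frames:" in text)

# Example usage on a generated file (the exact .bvh filename under
# outputs/outputvideo/alpha_pose_kunkun_cut/bvh depends on your input video):
# with open("outputs/outputvideo/alpha_pose_kunkun_cut/bvh/<name>.bvh") as f:
#     print(looks_like_bvh(f.read()))
```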