Predict 3D human pose from video.
Download the pretrained models and place them into the following directories:

- `./joints_detectors/Alphapose/models/sppe`
- `./joints_detectors/Alphapose/models/yolo`
- `./joints_detectors/hrnet/models/pytorch/pose_coco/`
- `./joints_detectors/hrnet/lib/detector/yolo`
- `./checkpoint`
- `./pose_trackers/lighttrack`

To use the mediapipe detector, install it with `pip install mediapipe`.
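A quick way to confirm the weights are in place: the sketch below (an illustrative helper, not part of the repo) checks that each directory above exists and is non-empty before you run the pipeline.

```python
# check_models.py -- illustrative helper, not part of the original repo.
import os

MODEL_DIRS = [
    './joints_detectors/Alphapose/models/sppe',
    './joints_detectors/Alphapose/models/yolo',
    './joints_detectors/hrnet/models/pytorch/pose_coco/',
    './joints_detectors/hrnet/lib/detector/yolo',
    './checkpoint',
    './pose_trackers/lighttrack',
]

for d in MODEL_DIRS:
    if not os.path.isdir(d):
        print(f'[missing] {d}')
    elif not os.listdir(d):
        print(f'[empty]   {d}  <- pretrained weights not downloaded yet?')
    else:
        print(f'[ok]      {d}')
```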
To run the pipeline on a video:

- Place your video into the `./outputs` folder (I've prepared a test video).
- Change the `video_path` in `./videopose.py`.
- Run it! You will find the rendered output video in the `./outputs` folder.

For multi-person videos (still under development), check `./videopose_multi_person`:
```python
video = 'kobe.mp4'

handle_video(f'outputs/{video}')
# Run AlphaPose and save the result into ./outputs/alpha_pose_kobe.

track(video)
# Take the result from above as the input of PoseTrack and write poseflow-results.json
# into the same directory as above.
# The visualization result is saved in ./outputs/alpha_pose_kobe/poseflow-vis.

# TODO: Need more action:
#  1. "Improve the accuracy of the tracking algorithm" or "do specific post-processing
#     after getting the track result".
#  2. Choose the target person (remove the other people's 2D points for each frame).
```
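For TODO item 2, a minimal sketch of what "choosing the person" could look like is shown below. It assumes the tracking output is a per-frame list of detections, each carrying an `idx` track id and a `keypoints` array; that layout is an assumption for illustration, not the repo's confirmed poseflow-results.json format.

```python
def keep_single_track(frames, target_idx):
    """Keep only the detection whose track id equals target_idx in each frame.

    Assumption: `frames` is a list of per-frame detection lists, and each
    detection is a dict with an 'idx' track id and a 'keypoints' array.
    This mirrors a typical PoseFlow-style result but is not the confirmed format.
    """
    kept = []
    for detections in frames:
        match = [d for d in detections if d.get('idx') == target_idx]
        # Use None when the target person is missing from a frame.
        kept.append(match[0]['keypoints'] if match else None)
    return kept
```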
In your IDE, mark `./joints_detectors/Alphapose`, `./joints_detectors/hrnet`, and `./pose_trackers` as source roots. The main entry point is `./videopose.py`. This script is based on the VideoPose3D code provided by Facebook and automates it in the following way:
```python
import os  # parse_args, main, Timer and video_path come from ./videopose.py and its imports

args = parse_args()
args.detector_2d = 'alpha_pose'          # select the 2D keypoint detector

# Derive the output GIF path from the input video path.
dir_name = os.path.dirname(video_path)
basename = os.path.basename(video_path)
video_name = basename[:basename.rfind('.')]
args.viz_video = video_path
args.viz_output = f'{dir_name}/{args.detector_2d}_{video_name}.gif'

args.evaluate = 'pretrained_h36m_detectron_coco.bin'  # pretrained 3D lifting checkpoint

# Run the full pipeline and time it.
with Timer(video_path):
    main(args)
```
The meaning of each argument can be found here; you can customize them conveniently by changing `args` in `./videopose.py`.
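As an illustration, a customization might look like the sketch below. The `'hrnet'` detector string is an assumption inferred from the `./joints_detectors/hrnet` directory; check `./videopose.py` for the exact values that `detector_2d` accepts.

```python
args = parse_args()

# Try the HRNet-based 2D detector instead of AlphaPose.
# NOTE: 'hrnet' is an assumed option name; verify the accepted values in ./videopose.py.
args.detector_2d = 'hrnet'

# Render a specific video from ./outputs and name the output after the detector.
args.viz_video = 'outputs/kobe.mp4'
args.viz_output = f'outputs/{args.detector_2d}_kobe.gif'

# Reuse the pretrained 3D lifting checkpoint from ./checkpoint.
args.evaluate = 'pretrained_h36m_detectron_coco.bin'

main(args)
```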
The 2D-pose-to-3D-pose lifting and visualization parts are from VideoPose3D.

Some of the "in the wild" script is adapted from the other fork.

The project structure and the `./videopose.py` running script are adapted from this repo.
Further features will be added in the future to improve accuracy.
If you find this project useful, please cite it as:

```
@misc{videotopose2021,
  author       = {Zheng, Hao},
  title        = {video-to-pose3D},
  year         = {2021},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/zh-plus/video-to-pose3D}},
}
```