*(Qualitative results and paper teaser video.)*
This repository is the official PyTorch implementation of *Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video*. Find more qualitative results here. The base code is largely borrowed from VIBE.
TCMR is tested on Ubuntu 20.04 with PyTorch 1.12 + CUDA 11.3 and Python 3.9. Previously, it was tested on Ubuntu 16.04 with PyTorch 1.4 and Python 3.7.10. You may need sudo privileges for the installation.
```bash
source scripts/install_pip.sh
```

If you have a problem related to `torchgeometry`, please check this out.
Running the script below downloads the base data to `${ROOT}/data/base_data/`.

```bash
source scripts/get_base_data.sh
```
Run `demo.py` on an input video. The output is saved in `${ROOT}/output/demo_output/`.

```bash
python demo.py --vid_file demo.mp4 --gpu 0
```
Here I report the performance of TCMR.
See our paper for more details.
Download the pre-processed data (except the InstaVariety dataset) from here.
Pre-processed InstaVariety is uploaded by the VIBE authors here.
You may also download the datasets from their sources and pre-process them yourself. Refer to this.
Put the SMPL layers (pkl files) under `${ROOT}/data/base_data/`.
The data directory structure should follow the hierarchy below.

```
${ROOT}
|-- data
|   |-- base_data
|   |-- preprocessed_data
|   |-- pretrained_models
```
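Before running evaluation or training, it can save time to verify that the layout above is in place. The sketch below is a convenience script, not part of the repository; the directory names are taken directly from the hierarchy above.

```python
import os

# Subdirectories expected under ${ROOT}/data, per the hierarchy above.
EXPECTED = ["base_data", "preprocessed_data", "pretrained_models"]

def missing_data_dirs(root):
    """Return the expected data subdirectories that do not exist under root."""
    data_dir = os.path.join(root, "data")
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(data_dir, d))]

if __name__ == "__main__":
    missing = missing_data_dirs(".")
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("Data directory layout looks complete.")
```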
```bash
# dataset: 3dpw, mpii3d, h36m
python evaluate.py --dataset 3dpw --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0
```
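The evaluation reports standard per-joint metrics such as MPJPE and acceleration error. As a rough, self-contained illustration (this is not the repository's evaluation code, which also reports Procrustes-aligned errors), the two metrics can be computed from predicted and ground-truth 3D joint sequences like this:

```python
import math

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance
    between predicted and ground-truth joints over all frames."""
    total, count = 0.0, 0
    for p_frame, g_frame in zip(pred, gt):
        for p, g in zip(p_frame, g_frame):
            total += math.dist(p, g)
            count += 1
    return total / count

def accel_error(pred, gt):
    """Mean distance between predicted and ground-truth joint
    accelerations (second finite differences over time)."""
    def accel(seq):
        return [
            [[seq[t + 1][j][k] - 2 * seq[t][j][k] + seq[t - 1][j][k]
              for k in range(3)]
             for j in range(len(seq[t]))]
            for t in range(1, len(seq) - 1)
        ]
    return mpjpe(accel(pred), accel(gt))
```

A prediction offset from the ground truth by a constant vector has a nonzero MPJPE but zero acceleration error, which is why the temporal metric is reported separately.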
See `${ROOT}/lib/core/config.py` for the available options.

```bash
# training outputs are saved in the `experiments` directory
# mkdir experiments
python train.py --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0
```
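To evaluate a trained model, the config's `TRAIN.PRETRAINED` field points at a saved checkpoint. A hedged sketch of such a config fragment (only `TRAIN.PRETRAINED` is confirmed by this README; the checkpoint date in the path is a made-up example):

```yaml
TRAIN:
  # illustrative path; substitute your own training date
  PRETRAINED: './experiments/2021-04-01/model_best.pth.tar'
```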
After training, checkpoints are saved in `${ROOT}/experiments/{date_of_training}/`. Change the config file's `TRAIN.PRETRAINED` to the checkpoint path (either `checkpoint.pth.tar` or `model_best.pth.tar`) and follow the evaluation command. The names exclude motion discriminator notations.

```bibtex
@InProceedings{choi2020beyond,
  title={Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video},
  author={Choi, Hongsuk and Moon, Gyeongsik and Chang, Ju Yong and Lee, Kyoung Mu},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
```
This project is licensed under the terms of the MIT license.
- I2L-MeshNet_RELEASE
- 3DCrowdNet_RELEASE
- TCMR_RELEASE
- Hand4Whole_RELEASE
- HandOccNet
- NeuralAnnot_RELEASE