This repository is part of the master's thesis "Analysis of Domain Adaptability for 3D Human Pose Estimation with Explicit Body Shape Models", which builds on End-to-end Recovery of Human Shape and Pose [1].
This repository contains:
According to Kanazawa et al. [1], new keypoints can easily be incorporated into the mesh representation by specifying the corresponding vertex ID. This feature makes the HMR framework very powerful. However, a general joint of the human body is more complex than a single point on the surface. To address this problem, a new keypoint annotation tool has been developed that allows generating a keypoint regressor for such more complex keypoints. A detailed description of the annotation process and an installation guide can be found in `keypoint_annotation_tool/README.md`.
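To illustrate the idea (this is not the tool's actual implementation, and the vertex IDs and function names below are made up): a keypoint regressor can be viewed as a sparse weight vector over the mesh vertices, so a single-vertex keypoint is one non-zero weight, while a more complex keypoint averages several vertices:

```python
import numpy as np

N_VERTICES = 6890  # number of vertices in the SMPL mesh [2]

def make_regressor(vertex_weights, n_vertices=N_VERTICES):
    """Build a sparse (n_vertices,) regressor from {vertex_id: weight} pairs."""
    reg = np.zeros(n_vertices)
    for vid, w in vertex_weights.items():
        reg[vid] = w
    return reg / reg.sum()  # normalise so the keypoint is a convex combination

# Toy mesh: random vertex positions stand in for a posed SMPL mesh.
vertices = np.random.rand(N_VERTICES, 3)

# Single-vertex keypoint (vertex ID 3500 is arbitrary here):
single = make_regressor({3500: 1.0})
# Multi-vertex keypoint averaging a small neighbourhood:
multi = make_regressor({3500: 0.5, 3501: 0.25, 3502: 0.25})

keypoint = multi @ vertices  # (3,) 3D keypoint location
```

Applying the regressor is just a dot product with the vertex matrix, which is why new keypoints integrate cleanly into the differentiable HMR pipeline.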
If you already have your environment set up, skip this and continue with Install Requirements.
Install virtualenv and virtualenvwrapper:
```
pip3.x install --upgrade --user virtualenv
pip3.x install --upgrade --user virtualenvwrapper
```
Add the following to your `.bashrc` or `.zshrc`:
```
export WORKON_HOME=$HOME/.virtualenvs
# (optional) set export paths to your local python and virtualenv
# export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3.x
# export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
source /usr/local/bin/virtualenvwrapper.sh
```
Set up the virtual environment:
```
mkvirtualenv hmr2.0
workon hmr2.0
pip install -U pip
```
Install the requirements (`tensorflow` is installed by default; use `tensorflow-gpu` for GPU support):
```
pip install -r requirements.txt
```
Create the `logs` folders:
```
mkdir -p "logs/paired(joints)" logs/unpaired
```
Download and unpack one of the pre-trained models into the appropriate folder:
- trained in the paired setting (no SMPL parameter supervision due to the missing dataset)
- trained in the unpaired setting (LSP + toes); needs the toes regressors
```
cp -r keypoint_annotation_tool/regressors models
# or
cd models && ln -s ../keypoint_annotation_tool/regressors
```
Run the demo:
```
cd src/visualise
python demo.py --image=coco1.png --model=base_model --setting=paired\(joints\) --joint_type=cocoplus --init_toes=false
python demo.py --image=lsp1.png --model=base_model --setting=paired\(joints\) --joint_type=cocoplus --init_toes=true
```
Note: no camera is applied to the mesh overlay (Trimesh does not support orthographic projection).
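For context, HMR predicts a weak-perspective camera `[s, tx, ty]` rather than a full perspective model. A minimal sketch of projecting 3D joints with such a camera (the function name and the camera values are illustrative, not the repo's API):

```python
import numpy as np

def weak_perspective_project(points3d, scale, trans):
    """Project (N, 3) points with a weak-perspective camera [s, tx, ty]:
    drop depth, translate in the image plane, then scale."""
    return scale * (points3d[:, :2] + trans)

joints3d = np.random.randn(19, 3)        # e.g. 19 cocoplus joints
cam = np.array([0.9, 0.02, -0.05])       # illustrative [s, tx, ty]
joints2d = weak_perspective_project(joints3d, cam[0], cam[1:])  # (19, 2)
```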
Set `ROOT_DATA_DIR` in `src/main/config.py`. Run `src/visualise/notebooks/inspect_chekpoint.ipynb` to update the sample counts in the config for a correct display of the progress bar (requires a Jupyter installation). Then start the training:
```
cd src/main
python model.py > train.txt 2>&1 &!
```
Training takes up to 2 days on an RTX 2080 Ti GPU!
For evaluation, see `eval/README.md`.
[1] Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. "End-to-end Recovery of Human Shape and Pose". In: Computer Vision and Pattern Recognition (CVPR). 2018.
[2] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. "SMPL: A Skinned Multi-Person Linear Model". In: ACM Trans. Graphics (Proc. SIGGRAPH Asia) 34.6 (Oct. 2015), 248:1–248:16.