
Domain Analysis of End-to-end Recovery of Human Shape and Pose

Python 3.6 · Test · License: MIT

This repository is part of the master's thesis "Analysis of Domain Adaptability for 3D Human Pose Estimation with Explicit Body Shape Models", based on End-to-end Recovery of Human Shape and Pose.

This repository contains:

According to Kanazawa et al. [1], new keypoints can easily be incorporated into the mesh representation by specifying the corresponding vertex ID. This feature makes the HMR framework very powerful. However, a general joint of the human body is more complex than a single point on the surface. To address this problem, a new keypoint annotation tool has been developed that makes it possible to generate a keypoint regressor for such more complex keypoints (a rough sketch of how a regressor is applied follows below). A detailed description of the annotation process and an installation guide can be found in keypoint_annotation_tool/README.md.
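
As a rough illustration (not this repository's actual implementation), a keypoint regressor can be thought of as a sparse weight vector over the mesh vertices: the keypoint is the weighted combination of the selected vertices, and a plain vertex ID is just the special case of a single weight of 1. Names and shapes below are assumptions; the vertex count 6890 is that of the SMPL template mesh [2].

    import numpy as np

    NUM_VERTICES = 6890  # vertex count of the SMPL template mesh [2]

    def regress_keypoint(vertices, weights):
        """Compute one keypoint as a weighted combination of mesh vertices.

        vertices: (6890, 3) posed mesh vertices
        weights:  (6890,)   sparse, non-negative weights that sum to 1
        """
        return weights @ vertices  # -> (3,) keypoint location

    def single_vertex_regressor(vertex_id):
        """Degenerate regressor that simply selects one vertex by its ID."""
        w = np.zeros(NUM_VERTICES)
        w[vertex_id] = 1.0
        return w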

(screenshot of the keypoint annotation tool)

Requirements

Installation

Recommended Environment Setup

If you already have your environment set up, skip this and continue with Install Requirements.

  1. Install virtualenv and virtualenvwrapper:

    pip3.x install --upgrade --user virtualenv
    pip3.x install --upgrade --user virtualenvwrapper
  2. Add the following to your .bashrc or .zshrc:

    export WORKON_HOME=$HOME/.virtualenvs
    # (optional) set export paths to your local python and virtualenv 
    # export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3.x 
    # export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
    source /usr/local/bin/virtualenvwrapper.sh
  3. Set up the virtual environment:

    mkvirtualenv hmr2.0
    workon hmr2.0
    pip install -U pip

Install Requirements

With the virtual environment activated, install the project's Python dependencies (typically via pip from the repository's requirements file).

Run Demo

  1. Create the log folders:
    mkdir -p "logs/paired(joints)" logs/unpaired
  2. Download and unpack one of the pre-trained models into the appropriate folder:

  3. (optional) All models ending with (LSP + toes) need the toes regressors.
    • Link or copy the regressors folder from keypoint_annotation_tool to the models folder:
      cd models && cp -r ../keypoint_annotation_tool/regressors .
      # or
      cd models && ln -s ../keypoint_annotation_tool/regressors
  4. Run the demo:

    cd src/visualise
    python demo.py --image=coco1.png --model=base_model --setting=paired\(joints\) --joint_type=cocoplus --init_toes=false

    demo image

    python demo.py --image=lsp1.png --model=base_model --setting=paired\(joints\) --joint_type=cocoplus --init_toes=true

    demo image

Note: no camera is applied to the mesh overlay (Trimesh does not support orthographic projection).
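
For context, HMR [1] predicts a weak-perspective camera, i.e. a scale s and an image-plane translation (tx, ty), and a 3D point is projected by translating and then scaling its x/y coordinates. The following is a minimal sketch of that projection, assuming joints of shape (K, 3) and a camera vector [s, tx, ty]; variable names and conventions are illustrative and not necessarily the ones used in this repository.

    import numpy as np

    def weak_perspective_project(joints3d, cam):
        """Project 3D joints to the image plane with a weak-perspective camera.

        joints3d: (K, 3) joint locations in the model frame
        cam:      (3,)   [s, tx, ty], the camera parameterisation of HMR [1]
        """
        s, tx, ty = cam
        return s * (joints3d[:, :2] + np.array([tx, ty]))  # -> (K, 2) image-plane joints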

Training

  1. Convert the datasets into TFRecord format, see datasets_preprocessing/README.md
  2. Update ROOT_DATA_DIR in src/main/config.py
  3. (optional) Run src/visualise/notebooks/inspect_chekpoint.ipynb to update the sample counts in the config so the progress bar displays correctly (requires a Jupyter installation); a record-counting sketch is shown below
  4. Run training
    cd src/main
    python model.py > train.txt 2>&1 &!

Training takes up to 2 days on an RTX 2080 Ti GPU!
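
As referenced in step 3, the sample count of the converted data can also be obtained by iterating over the TFRecord files directly. A minimal sketch, assuming the records are stored under ROOT_DATA_DIR with a .tfrecord extension (the actual layout and file naming may differ):

    import glob
    import tensorflow as tf

    # Adjust the pattern to the actual dataset layout under ROOT_DATA_DIR.
    files = glob.glob('/path/to/ROOT_DATA_DIR/**/*.tfrecord', recursive=True)

    # Counting by iteration is slow for large datasets but needs no knowledge of the record schema.
    num_samples = sum(1 for _ in tf.data.TFRecordDataset(files))
    print(f'{num_samples} samples in {len(files)} files')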

Evaluation

See eval/README.md

Source

[1] Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. "End-to-end Recovery of Human Shape and Pose". In: Computer Vision and Pattern Recognition (CVPR). 2018.
[2] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. "SMPL: A Skinned Multi-Person Linear Model". In: ACM Trans. Graphics (Proc. SIGGRAPH Asia) 34.6 (Oct. 2015), 248:1–248:16.