
Multi-Person Absolute 3D Human Pose Estimation with Weak Depth Supervision

This repo contains the code for the paper Multi-Person Absolute 3D Human Pose Estimation with Weak Depth Supervision.

Prerequisites

The model uses the following packages:

Evaluating on MuPoTS-3D

To reproduce the results in the paper, first download the MuPoTS-3D dataset. You'll also need the preprocessed data and model. Extract the downloaded zip into the root folder of the repository. To evaluate the pretrained model, use the following command:
MUPOTS_FOLDER=<path/to/mupots> python3 scripts/eval.py normalized
The above command evaluates the model trained on normalized MuCo coordinates (see the paper for details). You can swap normalized for unnormalized to evaluate the model trained on the unnormalized coordinates.
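For example, to evaluate the model trained on unnormalized coordinates:
MUPOTS_FOLDER=<path/to/mupots> python3 scripts/eval.py unnormalized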

Running on new images

You can run the model on new images for which no preprocessed data exists. For that, you'll also need Mask-RCNN and HR-net installed:

  1. Install Detectron
  2. Install HR-net
  3. Download the pretrained MegaDepth model:
    wget -O best_generalization_net_G.pth http://www.cs.cornell.edu/projects/megadepth/dataset/models/best_generalization_net_G.pth
  4. Add the focal length and principal point coordinates to metadata.csv (see examples/metadata.csv for an example; a sketch of generating such a file appears after this list). The former can be found in the camera specifications; for the latter, the center of the image is a good approximation.
  5. Edit the predict.sh script to include the root locations of Detectron and HR-net. You may also need to activate/deactivate the relevant Python virtualenvs around the Detectron and HR-net steps.
  6. Run the prediction script:
    ./predict.sh examples/imgs examples/metadata.csv
    The output is saved in results.pkl (see the loading sketch after this list).
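
As a sketch of step 4, the snippet below writes a metadata.csv with camera intrinsics from Python. The column layout used here (image name, focal length, principal point x, principal point y) is an assumption for illustration only; examples/metadata.csv defines the actual format expected by predict.sh.

    import csv

    # Hypothetical column layout -- check examples/metadata.csv for the real one:
    # image name, focal length (pixels), principal point x, principal point y.
    # For a 1920x1080 image, the image center (960, 540) approximates the
    # principal point; the focal length comes from the camera specifications.
    rows = [
        ("img_0001.jpg", 1469.2, 960.0, 540.0),
    ]

    with open("metadata.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)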
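
Once predict.sh finishes, results.pkl can be inspected from Python. The snippet below only assumes it is a standard pickle; the exact structure of the stored predictions is defined by the prediction script, so print the top-level object first and adapt your code accordingly.

    import pickle

    with open("results.pkl", "rb") as f:
        results = pickle.load(f)

    # The layout of the predictions is an assumption here: inspect the
    # top-level object, then drill down into the per-image pose arrays.
    print(type(results))
    if isinstance(results, dict):
        for key, value in results.items():
            print(key, getattr(value, "shape", type(value)))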