vita-epfl / openpifpaf_wholebody

PifPaf extension to detect body, foot, face and hand keypoints.

Important note: No further maintenance

OpenPifPaf WholeBody is now part of the main OpenPifPaf implementation and you can find the respective guide here. This repository is therefore no longer maintained.

openpifpaf_wholebody

This is an extension to OpenPifPaf to detect body, foot, face and hand keypoints, which add up to 133 keypoints per person (17 body, 6 foot, 68 face and 2 × 21 hand keypoints). The annotations for these keypoints are taken from the COCO-WholeBody dataset.
Example outputs and skeleton

Soccer players with superimposed predictions

Image licensed under CC BY 4.0. The superimposed poses were predicted with:

python -m openpifpaf.predict 0001soccer.jpeg --checkpoint=shufflenetv2k30-wholebody --show --line-width=2 --decoder=cifcaf:0

Skeleton

Install via pip

You can use pip to install openpifpaf_wholebody:

pip3 install openpifpaf_wholebody

This will also automatically install openpifpaf, if it is not already installed.
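
To quickly check that the plugin was picked up, you can inspect openpifpaf's datamodule registry from Python. This is a minimal sketch, not part of the original instructions; it assumes that openpifpaf auto-discovers installed packages whose names start with openpifpaf_ and that this plugin registers itself under the dataset name wholebodykp used by the training commands further below.

# Sanity-check sketch (assumption: openpifpaf exposes its plugin registry
# as openpifpaf.DATAMODULES and this plugin registers as 'wholebodykp').
import openpifpaf

print('openpifpaf version:', openpifpaf.__version__)
print('wholebodykp registered:', 'wholebodykp' in openpifpaf.DATAMODULES)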

Visualize the skeleton

Visualize the human poses with 133 keypoints.

python -m openpifpaf_wholebody.constants

Predict

Use the pretrained model to perform a prediction:
python -m openpifpaf.predict 0001soccer.jpeg --checkpoint=shufflenetv2k30-wholebody --show --line-width=1 --decoder=cifcaf:0

Note that --decoder=cifcaf:0 has to be passed to select the first decoder. Since the pretrained model was trained on two datasets to achieve better performance, it has two decoders.
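
The pretrained checkpoint can also be used from the Python API instead of the command line. The sketch below is an illustration only: it assumes openpifpaf's Predictor class (available in recent openpifpaf versions) and that the shufflenetv2k30-wholebody checkpoint can be downloaded automatically by name; decoder selection equivalent to --decoder=cifcaf:0 may still need to be configured separately.

# Python API sketch (assumptions noted in the text above).
import PIL.Image
import openpifpaf

predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k30-wholebody')
image = PIL.Image.open('0001soccer.jpeg').convert('RGB')
predictions, gt_anns, image_meta = predictor.pil_image(image)

for ann in predictions:
    # ann.data is an (n_keypoints, 3) array of x, y and confidence values,
    # with 133 rows for a wholebody pose.
    print(ann.data.shape)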

Train

If you don't want to use the pre-trained model, you can train a model from scratch. To train, you first need to download the WholeBody annotations into your MS COCO dataset folder:

wget https://github.com/DuncanZauss/openpifpaf_assets/releases/download/v0.1.0/person_keypoints_train2017_wholebody_pifpaf_style.json -O /<PathToYourMSCOCO>/data-mscoco/annotations/person_keypoints_train2017_wholebody_pifpaf_style.json
wget https://github.com/DuncanZauss/openpifpaf_assets/releases/download/v0.1.0/person_keypoints_val2017_wholebody_pifpaf_style.json -O /<PathToYourMSCOCO>/data-mscoco/annotations/person_keypoints_val2017_wholebody_pifpaf_style.json

Note: The pifpaf-style annotation files were created with Get_annotations_from_coco_wholebody.py. If you want to create your own annotation files from COCO-WholeBody, you need to download the original files from the COCO-WholeBody page and then create the pifpaf-readable JSON files with Get_annotations_from_coco_wholebody.py. This can be useful if, for example, you only want to use a subset of images for training.
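
To illustrate what such a conversion roughly involves, the sketch below concatenates the separate body, foot, face and hand keypoint fields of a COCO-WholeBody annotation into a single 133-keypoint list. This is not the actual Get_annotations_from_coco_wholebody.py; the file names are examples and the real script may differ in details, but the field names follow the COCO-WholeBody annotation format.

# Rough conversion sketch (not the provided script; for illustration only).
import json

with open('coco_wholebody_val_v1.0.json') as f:
    coco = json.load(f)

for ann in coco['annotations']:
    merged = (ann['keypoints']          # 17 body keypoints
              + ann['foot_kpts']        # 6 foot keypoints
              + ann['face_kpts']        # 68 face keypoints
              + ann['lefthand_kpts']    # 21 left-hand keypoints
              + ann['righthand_kpts'])  # 21 right-hand keypoints
    ann['keypoints'] = merged           # 133 * 3 values: x, y, visibility
    ann['num_keypoints'] = sum(1 for v in merged[2::3] if v > 0)

with open('person_keypoints_val2017_wholebody_pifpaf_style.json', 'w') as f:
    json.dump(coco, f)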

Finally, you can train the model (note: this can take several days, even on good GPUs):

time CUDA_VISIBLE_DEVICES=0 python3 -m openpifpaf.train \
  --lr=0.0003 --momentum=0.95 --b-scale=3.0 \
  --epochs=150 --lr-decay 130 140 --lr-decay-epochs=10 \
  --batch-size=16 --weight-decay=1e-5 \
  --dataset=wholebodykp --wholebodykp-upsample=2 \
  --basenet=shufflenetv2k16 --loader-workers=16 \
  --wholebodykp-train-annotations=<PathToYourMSCOCO>/data-mscoco/annotations/person_keypoints_train2017_wholebody_pifpaf_style.json \
  --wholebodykp-val-annotations=<PathToYourMSCOCO>/data-mscoco/annotations/person_keypoints_val2017_wholebody_pifpaf_style.json \
  --wholebodykp-train-image-dir=<COCO_train_image_dir> \
  --wholebodykp-val-image-dir=<COCO_val_image_dir>

Evaluation

To evaluate your network you can use the following command:

time CUDA_VISIBLE_DEVICES=0 python3 -m openpifpaf.eval \
  --checkpoint=shufflenetv2k16-wholebody \
  --force-complete-pose --seed-threshold=0.2 --loader-workers=16 \
  --wholebodykp-val-annotations=<PathToYourMSCOCO>/data-mscoco/annotations/person_keypoints_val2017_wholebody_pifpaf_style.json \
  --wholebodykp-val-image-dir=<COCO_val_image_dir>

Using only a subset of keypoints

If you only want to train on a subset of keypoints, e.g. if you do not need the facial keypoints and only want to train on the body, foot and hand keypoints, it should be fairly easy to train on just this subset. You will need to:

Further information

For more information, refer to the OpenPifPaf Dev Guide.

License

© 2021 Duncan Zauss

This repository is licensed under the GNU AGPLv3 license. For more information refer to the LICENSE file.