This is a PyTorch implementation of MultiPoseNet (ECCV 2018, Muhammed Kocabas et al.).
Run inference on your own pictures.
Prepare checkpoint: specify the checkpoint file path params.ckpt, the test pictures directory testdata_dir, and the results directory testresult_dir in file multipose_test.py (a sketch of these settings follows the commands below). Run:
python ./evaluate/multipose_test.py # inference on your own pictures
python ./evaluate/multipose_coco_eval.py # COCO evaluation
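A minimal sketch of the multipose_test.py settings named above; the variable names follow this README, and the path values are placeholders to replace with your own:

    # hypothetical excerpt from multipose_test.py -- adjust the paths to your setup
    params.ckpt = './extra/models/posenet.ckpt'  # the prepared checkpoint file
    testdata_dir = './demo/test_images/'         # folder containing your own pictures
    testresult_dir = './demo/test_results/'      # folder where result images are written

COCO evaluation results with the baseline checkpoint: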
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.590
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.791
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.644
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.565
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.636
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.644
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.810
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.689
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.601
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.709
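For reference, COCO keypoint evaluation of this kind is typically a thin wrapper around pycocotools; a self-contained sketch (the detection result file name is hypothetical):

    # evaluate keypoint detections against COCO ground truth with pycocotools
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO('annotations/person_keypoints_val2017.json')  # ground truth
    coco_dt = coco_gt.loadRes('multipose_results.json')          # detections in COCO format

    coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()  # prints an AP/AR table like the one above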
If you are using PyTorch v0.4.0 or v0.4.1, disable cuDNN for batch_norm:
# PYTORCH=/path/to/pytorch
# for pytorch v0.4.0
sed -i "1194s/torch\.backends\.cudnn\.enabled/False/g" ${PYTORCH}/torch/nn/functional.py
# for pytorch v0.4.1
sed -i "1254s/torch\.backends\.cudnn\.enabled/False/g" ${PYTORCH}/torch/nn/functional.py
# Note that instructions like # PYTORCH=/path/to/pytorch indicate that you should pick
# a path where you'd like to have pytorch installed and then set an environment
# variable (PYTORCH in this case) accordingly.
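The sed lines patch the batch_norm call in torch/nn/functional.py so that it never uses cuDNN. If you prefer not to edit the installed package, a blunter alternative (assumption: the slowdown of disabling cuDNN everywhere is acceptable to you) is:

    # disable cuDNN globally from your own code instead of patching functional.py
    import torch
    torch.backends.cudnn.enabled = False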
If you are using Anaconda, we suggest you create a new conda environment:
conda env create -f multipose_environment.yaml
You may need to change the channels: and prefix: settings in multipose_environment.yaml to fit your own Anaconda installation. Then activate the environment and install pycocotools:
source activate Multipose
pip install pycocotools
You can also follow the dependencies listed in multipose_environment.yaml to build your own Python environment.
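A quick sanity check that the environment works (assumes the steps above completed successfully):

    # confirm that the key packages import and CUDA is visible
    import torch
    import pycocotools
    print(torch.__version__, torch.cuda.is_available())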
Build the NMS extension
cd ./lib
bash build.sh
cd ..
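The extension provides fast compiled non-maximum suppression. For illustration only, here is a plain-NumPy sketch of the greedy box NMS such extensions typically compute; the built version is functionally similar but much faster:

    # greedy non-maximum suppression over [x1, y1, x2, y2] boxes (illustrative only)
    import numpy as np

    def nms(boxes, scores, iou_threshold=0.5):
        x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
        areas = (x2 - x1 + 1) * (y2 - y1 + 1)
        order = scores.argsort()[::-1]  # indices sorted by descending score
        keep = []
        while order.size > 0:
            i = order[0]                # highest-scoring remaining box
            keep.append(int(i))
            # intersection of box i with every other remaining box
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            inter = np.maximum(0.0, xx2 - xx1 + 1) * np.maximum(0.0, yy2 - yy1 + 1)
            iou = inter / (areas[i] + areas[order[1:]] - inter)
            order = order[1:][iou <= iou_threshold]  # drop boxes overlapping too much
        return keep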
You can skip this step if you just want to run inference on your own pictures using our baseline checkpoint.
Make them look like this:
${COCO_ROOT}
    --annotations
        --instances_train2017.json
        --instances_val2017.json
        --person_keypoints_train2017.json
        --person_keypoints_val2017.json
    --images
        --train2014
        --val2014
        --train2017
        --val2017
    --mask2014
    --COCO.json
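To verify the layout, the annotation files should load directly with pycocotools (replace the path placeholder with your own ${COCO_ROOT}):

    # sanity-check the COCO layout: count images with person annotations in val2017
    from pycocotools.coco import COCO

    coco = COCO('/path/to/COCO_ROOT/annotations/person_keypoints_val2017.json')
    person_ids = coco.getImgIds(catIds=coco.getCatIds(catNms=['person']))
    print(len(person_ids), 'val2017 images contain person annotations')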
Set coco_root to your own COCO path. Set params.gpus to define which GPU devices you want to use, such as params.gpus = [0,1,2,3]. Trained models will be saved in the params.save_dir folder every epoch (a sketch of these settings follows the commands below). Run:
python ./training/multipose_keypoint_train.py # train keypoint subnet
python ./training/multipose_detection_train.py # train detection subnet
python ./training/multipose_prn_train.py # train PRN subnet
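A sketch of the settings described above; the actual location of these options in the training code may differ, and the values are examples only:

    # hypothetical excerpt from the training configuration -- example values
    coco_root = '/data/COCO/'                 # your own COCO path (layout as above)
    params.gpus = [0, 1, 2, 3]                # which GPU devices to use
    params.save_dir = './extra/checkpoints/'  # a checkpoint is saved here every epoch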
Prepare checkpoint: specify the checkpoint file path params.ckpt in file multipose_*_val.py. Run:
python ./evaluate/multipose_keypoint_val.py # validate keypoint subnet on the first 2644 images of val2014 marked by 'isValidation = 1', used as our minval dataset
python ./evaluate/multipose_detection_val.py # validate detection subnet on val2017
python ./evaluate/multipose_prn_val.py # validate PRN subnet on val2017
180925: Add posenet.py; add the NMS extension under ../lib.
180930: Add multipose_detection_train.py for RetinaNet. Add multipose_keypoint_*.py and multipose_detection_*.py for the Keypoint Estimation Subnet and Person Detection Subnet respectively. Remove multipose_resnet_*.py.
181003: Add multipose_prn_train.py for PRN. Add multipose_coco_eval.py for COCO evaluation.
181115: Add RetinaNet_data_pipeline.py; update posenet.