
Fast Human Pose Estimation Pytorch

This is an unofficial implementation of the CVPR 2019 paper Fast Human Pose Estimation (https://arxiv.org/abs/1811.05419) by Feng Zhang, Xiatian Zhu, and Mao Ye. Most of the code comes from pytorch-pose, a PyTorch implementation of the stacked hourglass network. In this repo, we follow the Fast Pose Distillation (FPD) approach proposed in Fast Human Pose Estimation to improve the accuracy of a lightweight network. We first trained a deep teacher network (stacks=8, standard convolution, 88.33% PCKh on MPII) and used it to teach a student network (stacks=2, depthwise convolution, 84.69% PCKh on MPII). Our experiment shows a 0.7% gain from knowledge distillation.
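The core idea of FPD is to train the student with two mean-squared-error terms: one against the ground-truth heatmaps and one against the teacher's predicted heatmaps. The snippet below is a minimal sketch of that loss in PyTorch; the function name fpd_loss and the weighting factor alpha=0.5 are illustrative assumptions, not the exact code used in this repo.

    import torch
    import torch.nn.functional as F

    def fpd_loss(student_heatmaps, teacher_heatmaps, gt_heatmaps, alpha=0.5):
        """Fast Pose Distillation style loss (illustrative sketch).

        student_heatmaps: output of the lightweight student network
        teacher_heatmaps: output of the frozen teacher network
        gt_heatmaps:      ground-truth Gaussian heatmaps
        alpha:            weight balancing ground-truth vs. teacher supervision
        """
        # Supervision from the annotated ground truth
        loss_gt = F.mse_loss(student_heatmaps, gt_heatmaps)
        # Supervision distilled from the teacher's predictions (no gradients into the teacher)
        loss_kd = F.mse_loss(student_heatmaps, teacher_heatmaps.detach())
        return alpha * loss_gt + (1.0 - alpha) * loss_kd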

I benchmarked the light student model hg_s2_b1_mobile_fpd and got 43 fps on an i7-8700K via OpenVINO. Details can be found in Fast_Stacked_Hourglass_Network_OpenVino.

Please also check the official implementation: fast-human-pose-estimation.pytorch.

Updated in Feb 2019

Results

hg_s8_b1: teacher model; hg_s2_b1_mobile: student model; hg_s2_b1_mobile_fpd: student model trained with FPD; hg_s2_b1_mobile_fpd_unlabeled: student model trained with FPD plus extra unlabeled samples.

# Params is given in millions (m).

| Model | in_res | features | # of Weights | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean | GFLOPs | Link |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| hg_s8_b1 | 256 | 128 | 25.59m | 96.59 | 95.35 | 89.38 | 84.15 | 88.70 | 83.98 | 79.59 | 88.33 | 28 | GoogleDrive |
| hg_s2_b1_mobile | 256 | 128 | 2.31m | 95.80 | 93.61 | 85.50 | 79.63 | 86.13 | 77.82 | 73.62 | 84.69 | 3.2 | GoogleDrive |
| hg_s2_b1_mobile_fpd | 256 | 128 | 2.31m | 95.67 | 94.07 | 86.31 | 79.68 | 86.00 | 79.67 | 75.51 | 85.41 | 3.2 | GoogleDrive |
| hg_s2_b1_mobile_fpd_unlabeled | 256 | 128 | 2.31m | 95.94 | 94.11 | 87.18 | 80.69 | 87.03 | 79.17 | 74.82 | 85.69 | 3.2 | GoogleDrive |

Installation

  1. Create a virtualenv

    virtualenv -p /usr/bin/python2.7 pose_venv
  2. Clone the repository with submodule

    git clone --recursive https://github.com/yuanyuanli85/Fast_Human_Pose_Estimation_Pytorch.git
  3. Install all dependencies in virtualenv

    source pose_venv/bin/activate
    pip install -r requirements.txt
  4. Create a symbolic link to the images directory of the MPII dataset:

    ln -s PATH_TO_MPII_IMAGES_DIR data/mpii/images
  5. Disable cuDNN for the batchnorm layer to work around a bug in PyTorch 0.4.0

    sed -i "1194s/torch\.backends\.cudnn\.enabled/False/g" ./pose_venv/lib/python2.7/site-packages/torch/nn/functional.py
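After the steps above, a quick sanity check (a minimal sketch; it only assumes the virtualenv and the PyTorch install from the previous steps) is to confirm the interpreter sees PyTorch and report whether CUDA is available:

    # Run inside pose_venv: verify that PyTorch imports and report CUDA availability
    import torch
    print(torch.__version__)
    print(torch.cuda.is_available())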

Quick Demo

Training teacher network

Training with Knowledge Distillation

Evaluation

Run evaluation to generate a .mat prediction file

python example/mpii.py -a hg --stacks 2 --blocks 1 --checkpoint checkpoint/hg_s2_b1/ --resume checkpoint/hg_s2_b1/model_best.pth.tar -e

Run tools/eval_PCKh.py to get the validation score
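For reference, PCKh counts a predicted joint as correct when its distance to the ground truth is within a fraction (0.5 for PCKh@0.5) of the head segment length. The snippet below is a minimal NumPy sketch of that metric; the function name pckh and its arguments are illustrative and not the actual interface of tools/eval_PCKh.py.

    import numpy as np

    def pckh(pred, gt, head_sizes, threshold=0.5):
        """Illustrative PCKh: fraction of joints within threshold * head size of the ground truth.

        pred, gt:   arrays of shape (num_samples, num_joints, 2) holding (x, y) coordinates
        head_sizes: array of shape (num_samples,) with the head segment length per sample
        """
        # Euclidean distance between prediction and ground truth, per joint
        dist = np.linalg.norm(pred - gt, axis=2)
        # Normalize each sample's distances by its head segment length
        norm_dist = dist / head_sizes[:, None]
        # Fraction of joints falling within the threshold
        return np.mean(norm_dist <= threshold)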

Export PyTorch checkpoint to ONNX

python tools/mpii_export_to_onxx.py -a hg -s 2 -b 1 --num-classes 16 --mobile True --in_res 256 --checkpoint checkpoint/model_best.pth.tar --out_onnx checkpoint/model_best.onnx
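To sanity-check the exported model, one option (a sketch assuming the onnxruntime package, which is not listed in this repo's requirements) is to run a dummy 256x256 input through it and inspect the output shapes:

    import numpy as np
    import onnxruntime as ort

    # Load the exported model and run a dummy forward pass
    sess = ort.InferenceSession("checkpoint/model_best.onnx")
    input_name = sess.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 256, 256).astype(np.float32)  # NCHW, in_res=256
    outputs = sess.run(None, {input_name: dummy})
    # Hourglass heatmaps are typically (1, 16, 64, 64) for a 256 input
    print([o.shape for o in outputs])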


Reference