Multi-person pose estimation using center point detection:
| Backbone | AP | FPS | TensorRT Speed (FPS) | GFLOPs | Download |
|---|---|---|---|---|---|
| DLA-34 | 62.7 | 23 | - | - | model |
| ResNet-50 | 54.5 | 28 | 33 | - | model |
| MobileNetV3 | 46.0 | 30 | - | - | model |
| ShuffleNetV2 | 43.9 | 25 | - | - | model |
| HRNet-W32 | 63.8 | 16 | - | - | model |
| HarDNet | 46.0 | 30 | - | - | model |
| [Darknet53]() | 34.2 | 30 | - | - | model |
| [EfficientDet]() | 38.2 | 30 | - | - | model |
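For background, the center-point formulation from Objects as Points detects each person as a peak on a center heatmap and regresses the K joint locations as 2K offsets from that center. The NumPy sketch below is a minimal illustration of that decoding step under assumed output shapes; it is not this repository's decoder, and names such as `decode_poses` are hypothetical.

```python
# Minimal sketch of center-point pose decoding (CenterNet-style).
# Assumptions: center_heatmap is (H, W) with scores in [0, 1];
# joint_offsets is (2K, H, W) with interleaved (x, y) offsets per joint.
import numpy as np

def decode_poses(center_heatmap, joint_offsets, k=5, threshold=0.3):
    """Return up to `k` poses as (score, joints) with joints of shape (K, 2)."""
    h, w = center_heatmap.shape
    num_joints = joint_offsets.shape[0] // 2

    # Keep only local maxima (a cheap 3x3 NMS on the heatmap).
    padded = np.pad(center_heatmap, 1, constant_values=-np.inf)
    neighborhood = np.stack([
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ])
    peaks = center_heatmap * (center_heatmap == neighborhood.max(axis=0))

    # Take the top-k peaks above threshold; each peak is one person center.
    flat = peaks.ravel()
    order = np.argsort(flat)[::-1][:k]
    poses = []
    for idx in order:
        score = flat[idx]
        if score < threshold:
            break
        cy, cx = divmod(int(idx), w)
        offsets = joint_offsets[:, cy, cx].reshape(num_joints, 2)
        joints = offsets + np.array([cx, cy])  # center + regressed offset
        poses.append((float(score), joints))
    return poses

# Toy usage with random outputs (17 COCO joints -> 34 offset channels).
heatmap = np.random.rand(128, 128)
offsets = np.random.randn(34, 128, 128) * 4
poses = decode_poses(heatmap, offsets, k=3)
print([round(score, 3) for score, _ in poses])
```

In the full method, each regressed joint is additionally snapped to the nearest peak on a per-joint heatmap for refinement; that step is omitted here for brevity.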
After cloning, initialize and update the submodules:

```bash
git submodule init && git submodule update
```

Please refer to INSTALL.md for installation instructions.
We support demos for a single image, an image folder, video, and webcam input.
First, download the DLA-34 model from the model zoo and place it anywhere you like.
Run:

```bash
cd tools
python demo.py --cfg ../experiments/dla_34_512x512.yaml --TESTMODEL /your/model/path/dla34_best.pth --DEMOFILE ../images/33823288584_1d21cf0a26_k.jpg --DEBUG 1
```
The result for the example image should look like this:
To evaluate the model, run:

```bash
cd tools
python evaluate.py --cfg ../experiments/dla_34_512x512.yaml --TESTMODEL /your/model/path/dla34_best.pth --DEMOFILE --DEBUG 0
```
After installation, follow the instructions in DATA.md to set up the datasets.
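For reference, a common COCO keypoints layout looks like the tree below. This is an assumption based on typical CenterNet-style setups, not something verified against this repo; DATA.md is the authoritative reference.

```
data/coco/
├── annotations/
│   ├── person_keypoints_train2017.json
│   └── person_keypoints_val2017.json
├── train2017/
└── val2017/
```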
We provide config files for all the experiments in the experiments folder.
To train a model, point `--cfg` at one of the YAML files, e.g.:

```bash
cd ./tools
python -m torch.distributed.launch --nproc_per_node 4 train.py --cfg ../experiments/dla_34_512x512.yaml
```
The demo files are located in the demo directory; together they would make a very robust human detection + tracking + face re-identification system.
This project is released under the MIT License (refer to the LICENSE file for details).
If you find this project useful for your research, please use the following BibTeX entry.
```
@article{zhou2019objects,
  title={Objects as Points},
  author={Zhou, Xingyi and Wang, Dequan and Kr{\"a}henb{\"u}hl, Philipp},
  journal={arXiv preprint arXiv:1904.07850},
  year={2019}
}
```