The implementation of “Towards Accurate One-Stage Object Detection with AP-Loss” (CVPR 2019).
git clone https://github.com/cccorn/AP-loss.git
cd AP-loss
pip install pycocotools
pip install opencv-python
mkdir data models results
ln -s $YOUR_PATH_TO_coco data/coco
ln -s $YOUR_PATH_TO_VOCdevkit data/voc
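Before preparing the data further, a quick check that the dependencies installed above import cleanly can save time. This snippet is not part of the repository, just an optional sanity check:

```python
# Optional sanity check (not part of the repository): verify that the
# dependencies installed above are importable.
import cv2                         # from opencv-python
import torch                       # PyTorch, used by the training/testing code
from pycocotools.coco import COCO  # from pycocotools

print("OpenCV:", cv2.__version__)
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```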
The directories should be arranged like:
├── data
│ ├── coco
│ │ ├── annotations
│ │ ├── images
│ │ │ ├── train2017
│ │ │ ├── val2017
│ │ │ ├── test-dev2017
│ ├── voc
│ │ ├── VOC2007
│ │ ├── VOC2012
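To confirm the symlinks resolve to the layout above, a small check like the following can help. The directory names are taken from the tree above; the annotation file name assumes the standard COCO 2017 annotation release:

```python
import os
from pycocotools.coco import COCO

# Directories taken from the layout above.
expected_dirs = [
    "data/coco/annotations",
    "data/coco/images/train2017",
    "data/coco/images/val2017",
    "data/voc/VOC2007",
    "data/voc/VOC2012",
]
for d in expected_dirs:
    print(d, "->", "ok" if os.path.isdir(d) else "MISSING")

# A stronger check: load the val2017 annotations (file name assumes the
# standard COCO 2017 annotation release).
coco = COCO("data/coco/annotations/instances_val2017.json")
print("val2017 images:", len(coco.getImgIds()))
```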
The pre-trained models should be placed in the models directory, arranged like:
├── models
│ ├── resnet50-pytorch.pth
│ ├── resnet101-pytorch.pth
We use the ResNet-50 and ResNet-101 pre-trained models, which are converted from here. We also provide the converted pre-trained models at this link.
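To confirm a downloaded checkpoint is readable before training, a quick inspection with torch.load works. The file name follows the models/ layout above; the parameter names inside the checkpoint depend on how it was converted:

```python
import torch

# File name follows the models/ layout above; use resnet101-pytorch.pth for
# the deeper backbone. The key names inside the checkpoint depend on how it
# was converted.
state_dict = torch.load("models/resnet50-pytorch.pth", map_location="cpu")
print("tensors in checkpoint:", len(state_dict))
for name, value in list(state_dict.items())[:5]:
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value)
    print(name, shape)
```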
bash train.sh
You can modify the configurations in lib/config.py
to change the gpu_ids, network depth, image size, etc.
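The real option names are defined in lib/config.py and may differ from the ones below; this is only a hypothetical illustration of the kind of settings meant here:

```python
# Hypothetical illustration only -- the actual option names live in
# lib/config.py and may differ. It just shows the kind of settings meant above.
gpu_ids = [0, 1]        # GPUs used for training
backbone_depth = 50     # 50 for ResNet-50, 101 for ResNet-101
image_min_side = 800    # shorter side of the resized input image
image_max_side = 1333   # cap on the longer side
```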
bash test.sh
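If testing produces a COCO-format detection file (for example under results/, the directory created earlier), it can be scored with the standard pycocotools evaluation. The detection file name below is an assumed example, not something the repository guarantees:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground truth (standard COCO 2017 annotation file) and a COCO-format
# detection file; the results path/name here is only an assumed example.
coco_gt = COCO("data/coco/annotations/instances_val2017.json")
coco_dt = coco_gt.loadRes("results/detections_val2017.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP, AP50, AP75, and the size-based breakdowns
```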
We release the AP-loss implementation in PyTorch instead of MXNet due to an engineering issue: the Python custom operator in MXNet does not run in parallel when using multiple GPUs. Implementing AP-loss in PyTorch is therefore more practical and gives faster training.
If you find this repository useful in your research, please consider citing:
@inproceedings{chen2019towards,
  title={Towards Accurate One-Stage Object Detection with AP-Loss},
  author={Chen, Kean and Li, Jianguo and Lin, Weiyao and See, John and Wang, Ji and Duan, Lingyu and Chen, Zhibo and He, Changwei and Zou, Junni},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5119--5127},
  year={2019}
}