This repo is the official implementation of the paper NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting. The code is developed based on the C^3 Framework, and adds several features compared with the original C^3 Framework. These features will be merged into the C^3 Framework as soon as possible.
Prerequisites
The dependencies are listed in requirements.txt; install them with `pip install -r requirements.txt`.

Installation
Clone this repo: `git clone https://github.com/gjy3035/NWPU-Crowd-Sample-Code.git`
Data Preparation
Download the NWPU-Crowd dataset. Unzip the `*zip` files in turn and place `images_part*` into a single folder. The final folder tree is as follows:
-- NWPU-Crowd
|-- images
| |-- 0001.jpg
| |-- 0002.jpg
| |-- ...
| |-- 5109.jpg
|-- jsons
| |-- 0001.json
| |-- 0002.json
| |-- ...
| |-- 3609.json
|-- mats
| |-- 0001.mat
| |-- 0002.mat
| |-- ...
| |-- 3609.mat
|-- train.txt
|-- val.txt
|-- test.txt
|-- readme.md
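As a quick sanity check of the layout above, the split files can be compared against the images folder. The sketch below is ours, not part of the repo; it assumes each non-empty line of a split file begins with an image ID such as `0001` (any extra columns are ignored).

```python
from pathlib import Path

def read_split_ids(split_file):
    """Parse a split file (train.txt / val.txt / test.txt).

    Assumption (not verified against the dataset docs): each non-empty
    line starts with an image ID like '0001'; extra columns are ignored.
    """
    ids = []
    for line in Path(split_file).read_text().splitlines():
        line = line.strip()
        if line:
            ids.append(line.split()[0])
    return ids

def missing_images(data_root, split_file):
    """Return IDs listed in the split file whose .jpg is absent from images/."""
    root = Path(data_root)
    return [i for i in read_split_ids(split_file)
            if not (root / "images" / f"{i}.jpg").exists()]
```

For example, `missing_images("NWPU-Crowd", "NWPU-Crowd/train.txt")` should return an empty list if the folder tree is complete.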
Run ./datasets/prepare_NWPU.m using MATLAB to generate the processed data. Then modify __C_NWPU.DATA_PATH in ./datasets/setting/NWPU.py to the path of your processed data.

Training
Modify the parameters in config.py and ./datasets/setting/NWPU.py (to reproduce our results, we recommend using our parameters in ./saved_exp_para). Run `python train.py`. Monitor the training process with `tensorboard --logdir=exp --port=6006`.

Testing
We only provide an example of forwarding the model on the test set. You may need to modify it to test your own models.
In test.py, modify the following settings:
- LOG_PARA: the same as __C_NWPU.LOG_PARA in ./datasets/setting/NWPU.py.
- dataRoot: the same as __C_NWPU.DATA_PATH in ./datasets/setting/NWPU.py.
- model_path: the path of your trained model.

Then run `python test.py`. We provide the pre-trained models in this link, which is a temporary share point of OneDrive; a permanent website will be provided as soon as possible.
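Note that LOG_PARA scales the ground-truth density maps during training in C^3-style pipelines, so a raw network output must be divided by it to recover a count. A minimal sketch of this step (the value below is illustrative only; use your configured __C_NWPU.LOG_PARA):

```python
LOG_PARA = 2550.0  # illustrative only: set to __C_NWPU.LOG_PARA from ./datasets/setting/NWPU.py

def predicted_count(density_map, log_para=LOG_PARA):
    """Recover the crowd count from a log-scaled density map.

    `density_map` is any nested iterable of floats, e.g. an H x W list
    or a torch tensor converted with .tolist().
    """
    flat = (v for row in density_map for v in row)
    return sum(flat) / log_para
```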
For an intuitive comparison, the visualization results of these methods are provided at this link. The overall results on the validation set are listed below:
Method | MAE | MSE | PSNR | SSIM |
---|---|---|---|---|
MCNN [1] | 218.53 | 700.61 | 28.558 | 0.875 |
C3F-VGG [2] | 105.79 | 504.39 | 29.977 | 0.918 |
CSRNet [3] | 104.89 | 433.48 | 29.901 | 0.883 |
CANNet [4] | 93.58 | 489.90 | 30.428 | 0.870 |
SCAR [5] | 81.57 | 397.92 | 30.356 | 0.920 |
SFCN+ [6] | 90.65 | 487.17 | 30.518 | 0.933 |
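For reference, MAE and MSE in the table follow the usual crowd-counting convention, in which "MSE" denotes the root of the mean squared count error. A minimal sketch of both metrics (the function name is ours):

```python
import math

def mae_mse(gt_counts, pred_counts):
    """MAE and MSE over per-image crowd counts, following the
    crowd-counting convention where MSE is the root mean squared error."""
    errs = [p - g for g, p in zip(gt_counts, pred_counts)]
    mae = sum(abs(e) for e in errs) / len(errs)
    mse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, mse
```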
For the leaderboard on the test set, please visit the Crowd benchmark. The Python evaluation code used by crowdbenchmark.com is provided in ./misc/evaluation_code.py, which is similar to our validation code in trainer.py.
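The table above also reports PSNR, computed between ground-truth and predicted density maps. A simplified, dependency-free sketch of PSNR (the function and its `data_range` default are illustrative, not the official implementation in ./misc/evaluation_code.py):

```python
import math

def psnr(gt_map, pred_map, data_range=1.0):
    """PSNR between two equal-sized 2-D maps (nested lists of floats).

    data_range is the assumed peak value of the maps; the official
    evaluation script may normalize differently.
    """
    gt = [v for row in gt_map for v in row]
    pr = [v for row in pred_map for v in row]
    mse = sum((a - b) ** 2 for a, b in zip(gt, pr)) / len(gt)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(data_range ** 2 / mse)
```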
If you find this project useful for your research, please cite:
@article{gao2020nwpu,
title={NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting and Localization},
author={Wang, Qi and Gao, Junyu and Lin, Wei and Li, Xuelong},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
doi={10.1109/TPAMI.2020.3013269},
year={2020}
}
Our code borrows a lot from the C^3 Framework; you may also cite:
@article{gao2019c,
title={C$^3$ Framework: An Open-source PyTorch Code for Crowd Counting},
author={Gao, Junyu and Lin, Wei and Zhao, Bin and Wang, Dong and Gao, Chenyu and Wen, Jun},
journal={arXiv preprint arXiv:1907.02724},
year={2019}
}
If you use the crowd counting models in this repo (MCNN, C3F-VGG, CSRNet, CANNet, SCAR, and SFCN+), please cite the corresponding papers.