GrokCV / HazyDet

Apache License 2.0

HazyDet: Open-Source Benchmark for Drone-View Object Detection With Depth-Cues in Hazy Scenes

This repository is the official implementation of HazyDet.

You can download our HazyDet dataset from Baidu Netdisk or OneDrive.

For both training and inference, the following dataset structure is required:

```
HazyDet
|-- train
|   |-- clean images
|   |-- hazy images
|   |-- labels
|-- val
|   |-- clean images
|   |-- hazy images
|   |-- labels
|-- test
|   |-- clean images
|   |-- hazy images
|   |-- labels
|-- RDDTS
    |-- hazy images
    |-- labels
```
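A misplaced folder typically only surfaces later as a cryptic dataloader error, so it can help to sanity-check the layout first. Below is a minimal stdlib sketch; the helper name `check_layout` is ours, not part of this repository:

```python
from pathlib import Path

# Expected split -> sub-directory layout, matching the tree above;
# RDDTS has no clean images.
EXPECTED = {
    "train": ("clean images", "hazy images", "labels"),
    "val": ("clean images", "hazy images", "labels"),
    "test": ("clean images", "hazy images", "labels"),
    "RDDTS": ("hazy images", "labels"),
}

def check_layout(root):
    """Return the expected sub-directories missing under `root` (empty list = OK)."""
    root = Path(root)
    return [
        f"{split}/{sub}"
        for split, subs in EXPECTED.items()
        for sub in subs
        if not (root / split / sub).is_dir()
    ]
```

`check_layout("HazyDet")` returns an empty list when the dataset is unpacked correctly, and otherwise names each missing folder.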

Note: the password for both Baidu Netdisk and OneDrive is grok.

Leaderboard and Model Zoo

All weight files in the model zoo can be accessed on Baidu Netdisk and OneDrive.

Detectors

| Model | Backbone | #Params (M) | GFLOPs | mAP on Test-set | mAP on RDDTS | Config | Weight |
|:---|:---|---:|---:|---:|---:|:---:|:---:|
| **One Stage** | | | | | | | |
| YOLOv3 | Darknet53 | 61.63 | 20.19 | 35.0 | 19.2 | config | weight |
| GFL | ResNet50 | 32.26 | 198.65 | 36.8 | 13.9 | config | weight |
| YOLOX | CSPDarkNet | 8.94 | 13.32 | 42.3 | 24.7 | config | weight |
| RepPoints | ResNet50 | 36.83 | 184.32 | 43.8 | 21.3 | config | weight |
| FCOS | ResNet50 | 32.11 | 191.48 | 45.9 | 22.8 | config | weight |
| CenterNet | ResNet50 | 32.11 | 191.49 | 47.2 | 23.8 | config | weight |
| ATSS | ResNet50 | 32.12 | 195.58 | 50.4 | 25.1 | config | weight |
| DDOD | ResNet50 | 32.20 | 173.05 | 50.7 | 26.1 | config | weight |
| VFNet | ResNet50 | 32.89 | 187.39 | 51.1 | 25.6 | config | weight |
| TOOD | ResNet50 | 32.02 | 192.51 | 51.4 | 25.8 | config | weight |
| **Two Stage** | | | | | | | |
| Sparse RCNN | ResNet50 | 108.54 | 147.45 | 27.7 | 10.4 | config | weight |
| Dynamic RCNN | ResNet50 | 41.35 | 201.72 | 47.6 | 22.5 | config | weight |
| Faster RCNN | ResNet50 | 41.35 | 201.72 | 48.7 | 23.6 | config | weight |
| Libra RCNN | ResNet50 | 41.62 | 209.92 | 49.0 | 23.7 | config | weight |
| Grid RCNN | ResNet50 | 64.46 | 317.44 | 50.5 | 25.2 | config | weight |
| Cascade RCNN | ResNet50 | 69.15 | 230.40 | 51.6 | 26.0 | config | weight |
| **End-to-End** | | | | | | | |
| Conditional DETR | ResNet50 | 43.55 | 94.17 | 30.5 | 11.7 | config | weight |
| DAB DETR | ResNet50 | 43.70 | 97.02 | 31.3 | 11.7 | config | weight |
| Deform DETR | ResNet50 | 40.01 | 192.51 | 51.9 | 26.5 | config | weight |
| **Plug-and-Play** | | | | | | | |
| FCOS-DeCoDet | ResNet50 | 34.62 | 225.37 | 47.4 | 24.3 | config | weight |
| VFNet-DeCoDet | ResNet50 | 34.61 | 249.91 | 51.5 | 25.9 | config | weight |

Dehazing

| Type | Method | PSNR | SSIM | mAP on Test-set | mAP on RDDTS | Weight |
|:---|:---|---:|---:|:---|:---|:---:|
| Baseline | Faster RCNN | - | - | 39.5 | 21.5 | weight |
| Dehaze | GridDehaze | 12.66 | 0.713 | 38.9 (-0.6) | 19.6 (-1.9) | weight |
| Dehaze | MixDehazeNet | 15.52 | 0.743 | 39.9 (+0.4) | 21.2 (-0.3) | weight |
| Dehaze | DSANet | 19.01 | 0.751 | 40.8 (+1.3) | 22.4 (+0.9) | weight |
| Dehaze | FFA | 19.25 | 0.798 | 41.2 (+1.7) | 22.0 (+0.5) | weight |
| Dehaze | DehazeFormer | 17.53 | 0.802 | 42.5 (+3.0) | 21.9 (+0.4) | weight |
| Dehaze | gUNet | 19.49 | 0.822 | 42.7 (+3.2) | 22.2 (+0.7) | weight |
| Dehaze | C2PNet | 21.31 | 0.832 | 42.9 (+3.4) | 22.4 (+0.9) | weight |
| Dehaze | DCP | 16.98 | 0.824 | 44.0 (+4.5) | 20.6 (-0.9) | weight |
| Dehaze | RIDCP | 16.15 | 0.718 | 44.8 (+5.3) | 24.2 (+2.7) | weight |

DeCoDet

Installation

Step 1: Create a conda environment

```shell
$ conda create --name HazyDet python=3.9
$ conda activate HazyDet
```

Step 2: Install PyTorch

```shell
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
```

Step 3: Install OpenMMLab 2.x Codebases

```shell
# OpenMMLab codebases
pip install -U openmim --no-input
mim install mmengine "mmcv>=2.0.0" "mmdet>=3.0.0" "mmsegmentation>=1.0.0" "mmrotate>=1.0.0rc1" mmyolo "mmpretrain>=1.0.0rc7" 'mmagic'
# other dependencies
pip install -U ninja scikit-image --no-input
```

Step 4: Install HazyDet

```shell
$ git clone git@github.com:GrokCV/HazyDet.git
$ cd HazyDet
$ python setup.py develop
```

Note: make sure you run `setup.py` from the root directory of HazyDet.

Training

```shell
$ python tools/train_det.py configs/DeCoDet/DeCoDet_r50_1x_hazydet.py
```

Inference

```shell
$ python tools/test.py configs/DeCoDet/DeCoDet_r50_1x_hazydet365k.py weights/fcos_DeCoDet_r50_1x_hazydet.pth
```

We have released our checkpoints trained on HazyDet in the model zoo above.

Depth Maps

The depth maps required for training can be obtained with Metric3D. Other depth estimation models can also be used.
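Whichever estimator produces them, raw depth values usually need to be brought to a common scale before use as supervision. The sketch below is an illustrative preprocessing step, not the repository's own pipeline; it assumes metric depth in an `(H, W)` float array, and the 80 m clip value is a placeholder, not a number from the paper:

```python
import numpy as np

def normalize_depth(depth, d_min=0.0, d_max=80.0):
    """Clip a metric depth map to [d_min, d_max] and rescale to [0, 1].

    `depth`: (H, W) array from any monocular depth estimator.
    The 80 m cap is a placeholder chosen for illustration only.
    """
    depth = np.clip(np.asarray(depth, dtype=np.float32), d_min, d_max)
    return (depth - d_min) / (d_max - d_min)
```

For example, `normalize_depth(np.array([[0.0, 40.0, 160.0]]))` maps the values to `[[0.0, 0.5, 1.0]]`, since 160 m is clipped to the 80 m cap before rescaling.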

Acknowledgement

We are grateful to the Tianjin Key Laboratory of Visual Computing and Intelligent Perception (VCIP) for providing essential resources. Our sincere appreciation goes to Professor Pengfei Zhu and the dedicated AISKYEYE team at Tianjin University for their invaluable support with data, which has been crucial to our research efforts. We also deeply thank Xianghui Li, Yuxin Feng, and other researchers for granting us access to their datasets, significantly advancing and promoting our work in this field. Additionally, our thanks extend to Metric3D for its contributions to the methodology presented in this article.

Citation

If you use this toolbox or benchmark in your research, please cite this project.

@article{feng2024HazyDet,
    title={HazyDet: Open-source Benchmark for Drone-view Object Detection with Depth-cues in Hazy Scenes}, 
    author={Feng, Changfeng and Chen, Zhenyuan and Kou, Renke and Gao, Guangwei and Wang, Chunping and Li, Xiang and Shu, Xiangbo and Dai, Yimian and Fu, Qiang and Yang, Jian},
    year={2024},
    journal={arXiv preprint arXiv:2409.19833},
}

@article{zhu2021detection,
  title={Detection and tracking meet drones challenge},
  author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume={44},
  number={11},
  pages={7380--7399},
  year={2021},
  publisher={IEEE}
}