# Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation

Zhipeng Du · Miaojing Shi · Jiankang Deng
PyTorch implementation of **Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation** (CVPR 2024). [[Page](https://zpdu.github.io/DAINet_page/) | [Paper](https://arxiv.org/abs/2312.01220)]
![overview](./assets/overview.png)
## 🚨 To-Do List
1. - [x] release the code for the proposed model and losses.
2. - [x] release the evaluation code and the pretrained models.
3. - [x] release the training code.
## :rocket: Installation
Begin by cloning the repository and setting up the environment:
```bash
git clone https://github.com/ZPDu/DAI-Net.git
cd DAI-Net
conda create -y -n dainet python=3.7
conda activate dainet
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
```
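Optionally, verify the environment before moving on. The snippet below is not part of the repo; it is a minimal check that the pinned PyTorch/torchvision builds are installed and that the CUDA runtime is visible:

```python
# Optional environment check (not part of this repo).
import torch
import torchvision

print("torch:", torch.__version__)              # expected: 1.13.1+cu117
print("torchvision:", torchvision.__version__)  # expected: 0.14.1+cu117
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```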
## :notebook_with_decorative_cover: Training
#### Data and Weight Preparation
- Download the WIDER Face Training & Validation images at [WIDER FACE](http://shuoyang1213.me/WIDERFACE/).
- Obtain the annotations of [training set](https://github.com/daooshee/HLA-Face-Code/blob/main/train_code/dataset/wider_face_train.txt) and [validation set](https://github.com/daooshee/HLA-Face-Code/blob/main/train_code/dataset/wider_face_val.txt).
- Download the [pretrained weight](https://drive.google.com/file/d/1MaRK-VZmjBvkm79E1G77vFccb_9GWrfG/view?usp=drive_link) of Retinex Decomposition Net.
- Prepare the [pretrained weight](https://drive.google.com/file/d/1whV71K42YYduOPjTTljBL8CB-Qs4Np6U/view?usp=drive_link) of the base network.
Organize the folders as:
```
.
├── utils
├── weights
│   ├── decomp.pth
│   └── vgg16_reducedfc.pth
├── dataset
│   ├── wider_face_train.txt
│   ├── wider_face_val.txt
│   └── WiderFace
│       ├── WIDER_train
│       └── WIDER_val
```
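Before training, you may want to confirm the layout matches the tree above. This sanity check is not provided by the repo; the paths simply mirror the tree (run it from the project root):

```python
# Optional sanity check (not part of this repo): verify the expected layout.
from pathlib import Path

expected = [
    "weights/decomp.pth",
    "weights/vgg16_reducedfc.pth",
    "dataset/wider_face_train.txt",
    "dataset/wider_face_val.txt",
    "dataset/WiderFace/WIDER_train",
    "dataset/WiderFace/WIDER_val",
]
missing = [p for p in expected if not Path(p).exists()]
print("All files in place." if not missing else f"Missing: {missing}")
```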
#### Model Training
To train the model, run
```bash
python -m torch.distributed.launch --nproc_per_node=$NUM_OF_GPUS$ train.py
```
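`torch.distributed.launch` starts one process per GPU, passes each a `--local_rank` argument, and exports the rendezvous variables (`MASTER_ADDR`, `MASTER_PORT`, `RANK`, `WORLD_SIZE`). `train.py` handles this internally; the sketch below only illustrates the standard pattern such a launched script follows and is not code taken from the repo:

```python
# Illustrative sketch of the setup torch.distributed.launch expects from the
# launched script; the actual train.py in this repo may differ in details.
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # injected by the launcher
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl")  # reads MASTER_ADDR/PORT, RANK, WORLD_SIZE from env

# Wrap the model (a stand-in module here) so gradients are synchronized across GPUs.
model = torch.nn.Linear(10, 2).cuda()
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])
```

With `--nproc_per_node=4`, for example, four such processes are launched, one per GPU.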
## :notebook: Evaluation
On Dark Face:
- Download the testing samples from [UG2+ Challenge](https://competitions.codalab.org/competitions/32499).
- Download a checkpoint: [DarkFaceZSDA](https://drive.google.com/file/d/1BdkYLGo7PExJEMFEjh28OeLP4U1Zyx30/view?usp=drive_link) (28.0 mAP) or [DarkFaceFS](https://drive.google.com/file/d/1ykiyAaZPl-mQDg_lAclDktAJVi-WqQaC/view?usp=drive_link) (52.9 mAP, fine-tuned with full supervision).
- Set (1) the paths to the test samples and the checkpoint and (2) whether to use the multi-scale testing strategy, then run test.py (an optional checkpoint sanity check is sketched after this list).
- Submit the results for benchmarking. ([Detailed instructions](https://competitions.codalab.org/competitions/32499)).
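A quick way to confirm a downloaded checkpoint deserializes correctly before pointing test.py at it (not part of the repo; the filename below is a placeholder for wherever you saved the file):

```python
# Optional sanity check (not part of this repo): load the checkpoint on CPU
# and report its structure. Replace the path with your own download location.
import torch

ckpt = torch.load("weights/DarkFaceZSDA.pth", map_location="cpu")
print("checkpoint type:", type(ckpt))
if isinstance(ckpt, dict):
    state = ckpt.get("state_dict", ckpt)  # some checkpoints nest the weights
    print(f"{len(state)} entries; sample keys: {list(state)[:3]}")
```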
On ExDark:
- Our experiments are based on the codebase of [MAET](https://github.com/cuiziteng/ICCV_MAET). You only need to replace the checkpoint with [ours](https://drive.google.com/file/d/1g74-aRdQP0kkUe4OXnRZCHKqNgQILA6r/view?usp=drive_link) for evaluation.
## Citation
If you find this work useful, please cite:
```bibtex
@inproceedings{du2024boosting,
title={Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation},
author={Du, Zhipeng and Shi, Miaojing and Deng, Jiankang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={12666--12676},
year={2024}
}
```
or the arXiv preprint:
```bibtex
@article{du2023boosting,
title={Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation},
author={Du, Zhipeng and Shi, Miaojing and Deng, Jiankang},
journal={arXiv preprint arXiv:2312.01220},
year={2023}
}
```
## Acknowledgement
We thank [DSFD.pytorch](https://github.com/yxlijun/DSFD.pytorch), [RetinexNet_PyTorch](https://github.com/aasharma90/RetinexNet_PyTorch), [MAET](https://github.com/cuiziteng/ICCV_MAET), and [HLA-Face](https://github.com/daooshee/HLA-Face-Code) for their amazing work!