Vibashan VS, Poojan Oza, Vishal M Patel
[Project Page] [arXiv] [pdf] [Slides] [BibTeX]
conda create -n irg_sfda python=3.6
conda activate irg_sfda
conda install pytorch==1.9.0 torchvision==0.10.0 torchaudio==0.9.0 cudatoolkit=10.2 -c pytorch
cd irg-sfda
pip install -r requirements.txt
## Make sure you have GCC and G++ version <=8.0
cd ..
python -m pip install -e irg-sfda
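As an optional sanity check (these commands are suggestions, not part of the original instructions), you can confirm the compiler version and the PyTorch/CUDA install before moving on:

## Optional sanity checks (suggested, not part of the original instructions)
gcc --version        # should report a version <= 8.0, matching the note above
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"   # expect 1.9.0 and True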
Download all the datasets into the "./dataset" folder. The code expects datasets in the PASCAL_VOC directory layout. For example, the Sim10k dataset is stored as follows.
$ cd ./dataset/Sim10k/VOC2012/
$ ls
Annotations ImageSets JPEGImages
$ cat ImageSets/Main/val.txt
3384827.jpg
3384828.jpg
3384829.jpg
.
.
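If a dataset copy ships without split files, one way to produce a val.txt in the format shown above is to list the image files directly. This is a convenience sketch, not part of the original release, and it assumes every image in JPEGImages belongs to the validation split:

## Convenience sketch: regenerate ImageSets/Main/val.txt from JPEGImages
## (assumes all images belong to the val split; not part of the original release)
cd ./dataset/Sim10k/VOC2012/
mkdir -p ImageSets/Main
ls JPEGImages > ImageSets/Main/val.txt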
CUDA_VISIBLE_DEVICES=$GPU_ID python tools/train_st_sfda_net.py \
--config-file configs/sfda/sfda_foggy.yaml --model-dir ./source_model/cityscape_baseline/model_final.pth
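Here $GPU_ID selects the GPU(s) made visible to the job through CUDA_VISIBLE_DEVICES; for example, GPU_ID=0 uses the first GPU and GPU_ID=0,1 exposes the first two. A concrete single-GPU invocation (the index 0 is only an example) looks like:

GPU_ID=0   # example index; any visible GPU id works
CUDA_VISIBLE_DEVICES=$GPU_ID python tools/train_st_sfda_net.py \
--config-file configs/sfda/sfda_foggy.yaml --model-dir ./source_model/cityscape_baseline/model_final.pth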
CUDA_VISIBLE_DEVICES=$GPU_ID python tools/plain_test_net.py --eval-only \
--config-file configs/sfda/foggy_baseline.yaml --model-dir $PATH_TO_CHECKPOINT
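For instance, to score the source-only Cityscapes model on the foggy target split, you can point the evaluation script at the source checkpoint used for adaptation above. Pairing that checkpoint with foggy_baseline.yaml is an assumption based on the training command, not something stated in the original instructions; substitute your own adapted checkpoint as needed.

## Example evaluation of the source-only checkpoint (checkpoint/config pairing is an assumption)
CUDA_VISIBLE_DEVICES=0 python tools/plain_test_net.py --eval-only \
--config-file configs/sfda/foggy_baseline.yaml --model-dir ./source_model/cityscape_baseline/model_final.pth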
If you find IRG-SFDA useful in your research, please consider starring ⭐ us on GitHub and citing 📚 our paper!
@inproceedings{vs2023instance,
  title={Instance relation graph guided source-free domain adaptive object detection},
  author={VS, Vibashan and Oza, Poojan and Patel, Vishal M},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3520--3530},
  year={2023}
}
We thank the developers and authors of Detectron for releasing their helpful codebase.