Implementation of our paper Progressive Domain Adaptation for Object Detection, based on pytorch-faster-rcnn and PyTorch-CycleGAN.
Progressive Domain Adaptation for Object Detection
Han-Kai Hsu, Chun-Han Yao, Yi-Hsuan Tsai, Wei-Chih Hung, Hung-Yu Tseng, Maneesh Singh, and Ming-Hsuan Yang
IEEE Winter Conference on Applications of Computer Vision (WACV), 2020.
Please cite our paper if you find it useful for your research.
@inproceedings{hsu2020progressivedet,
author = {Han-Kai Hsu and Chun-Han Yao and Yi-Hsuan Tsai and Wei-Chih Hung and Hung-Yu Tseng and Maneesh Singh and Ming-Hsuan Yang},
booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
title = {Progressive Domain Adaptation for Object Detection},
year = {2020}
}
This code is tested with PyTorch 0.4.1 and CUDA 9.0.
# PyTorch via pip: download and install the PyTorch 0.4.1 wheel for CUDA 9.0
# from https://download.pytorch.org/whl/cu90/torch_stable.html
# PyTorch via conda:
conda install pytorch=0.4.1 cuda90 -c pytorch
# Other dependencies:
pip install -r requirements.txt
sh ./lib/make.sh
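After installing the dependencies and building the library, a quick environment sanity check can be scripted. This is a minimal sketch (not part of the repository) that reports installed package versions without failing when a package is missing:

```python
# Minimal environment sanity check (not part of the repo): report whether
# each package is installed and which version, without raising if absent.
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version string for pkg, or None if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

if __name__ == "__main__":
    for pkg in ("torch", "numpy"):
        v = installed_version(pkg)
        print(f"{pkg}: {v if v else 'not installed'}")
```

Comparing the reported torch version against 0.4.1 (and the CUDA build against 9.0) catches mismatches before the build scripts fail later.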
Download the datasets and place them as follows:
- KITTI: data/KITTI/
- Cityscapes: data/CityScapes/ (images under data/CityScapes/leftImg8bit/)
- Foggy Cityscapes: place the train/val image folders under data/CityScapes/leftImg8bit/ as foggytrain and foggyval
- BDD100k: data/bdd100k/
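Before training, it can help to verify that the datasets landed in the expected locations. The sketch below is illustrative (the helper name is ours, and the placement of foggytrain/foggyval is inferred from the paths listed above):

```python
# Sketch (not part of the repo): check the expected dataset layout.
# Directory names follow the download paths listed above; the exact
# placement of foggytrain/foggyval is an assumption.
from pathlib import Path

EXPECTED_DIRS = [
    "data/KITTI",
    "data/CityScapes/leftImg8bit",
    "data/CityScapes/leftImg8bit/foggytrain",
    "data/CityScapes/leftImg8bit/foggyval",
    "data/bdd100k",
]

def missing_dirs(root="."):
    """Return the expected dataset directories that do not exist under root."""
    root = Path(root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]
```

Running `missing_dirs()` from the repository root and getting an empty list back means all expected dataset folders are present.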
Generate the synthetic data with the PyTorch-CycleGAN implementation.
git clone https://github.com/aitorzip/PyTorch-CycleGAN
Import the dataset loader code in ./cycleGAN_dataset_loader/
to train/test the CycleGAN on corresponding image translation task.
Follow the testing instructions on PyTorch-CycleGAN and download the weights below to generate synthetic images. (Remember to change the output image size accordingly.)
Save the generated synthetic images to:
- data/KITTI/training/synthCity_image_2/ with the same naming and folder structure as the original KITTI data.
- data/CityScapes/leftImg8bit/synthFoggytrain with the same naming and folder structure as the original Cityscapes data.
- data/CityScapes/leftImg8bit/synthBDDdaytrain and data/CityScapes/leftImg8bit/synthBDDdayval with the same naming and folder structure as the original Cityscapes data.
To train your own CycleGAN weights, please follow the training instructions on PyTorch-CycleGAN.
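Because each synthetic folder must mirror the original naming and folder structure, mapping an original image path to its synthetic destination is a pure path operation. A small illustrative helper (the function name and the example paths are our assumptions, not part of the repo):

```python
# Sketch (not part of the repo): map an original image path to the matching
# path inside a synthetic folder, preserving subfolders and file name.
from pathlib import Path

def synthetic_path(original, src_dir, dst_dir):
    """Replace the src_dir prefix of an image path with dst_dir,
    keeping the relative subfolder structure and file name intact."""
    rel = Path(original).relative_to(src_dir)
    return Path(dst_dir) / rel

# Hypothetical example using standard Cityscapes naming:
p = synthetic_path(
    "data/CityScapes/leftImg8bit/train/jena/jena_000074_000019_leftImg8bit.png",
    "data/CityScapes/leftImg8bit/train",
    "data/CityScapes/leftImg8bit/synthFoggytrain",
)
# p -> data/CityScapes/leftImg8bit/synthFoggytrain/jena/jena_000074_000019_leftImg8bit.png
```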
Download the following adapted weights to ./trained_weights/adapt_weight/
./experiments/scripts/test_adapt_faster_rcnn_stage1.sh [GPU_ID] [Adapt_mode] vgg16
# Specify the GPU_ID you want to use
# Adapt_mode selection:
# 'K2C': KITTI->Cityscapes
# 'C2F': Cityscapes->Foggy Cityscapes
# 'C2BDD': Cityscapes->BDD100k_day
# Example:
./experiments/scripts/test_adapt_faster_rcnn_stage1.sh 0 K2C vgg16
./experiments/scripts/train_adapt_faster_rcnn_stage1.sh [GPU_ID] [Adapt_mode] vgg16
# Specify the GPU_ID you want to use
# Adapt_mode selection:
# 'K2C': KITTI->Cityscapes
# 'C2F': Cityscapes->Foggy Cityscapes
# 'C2BDD': Cityscapes->BDD100k_day
# Example:
./experiments/scripts/train_adapt_faster_rcnn_stage1.sh 0 K2C vgg16
Download the following pretrained detector weights to ./trained_weights/pretrained_detector/
./experiments/scripts/train_adapt_faster_rcnn_stage2.sh [GPU_ID] [Adapt_mode] vgg16
# Example:
./experiments/scripts/train_adapt_faster_rcnn_stage2.sh 0 K2C vgg16
Discriminator score files:
Extract the pretrained CycleGAN discriminator scores to ./trained_weights/
or
Save a dictionary of CycleGAN discriminator scores, with the image name as the key and the score as the value.
Ex: {'jena_000074_000019_leftImg8bit.png': 0.64}
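Such a score dictionary can be written with plain pickle. A minimal sketch following the image-name-to-score layout described above (the file name and the pickle format are our assumptions, not requirements stated by the repo):

```python
import os
import pickle
import tempfile

# Hypothetical example: one CycleGAN discriminator score keyed by image
# file name, matching the {'image_name': score} layout described above.
scores = {"jena_000074_000019_leftImg8bit.png": 0.64}

# Save the dictionary; the file name here is our assumption.
path = os.path.join(tempfile.mkdtemp(), "discriminator_scores.pkl")
with open(path, "wb") as f:
    pickle.dump(scores, f)

# Loading it back yields the same image-name -> score mapping.
with open(path, "rb") as f:
    loaded = pickle.load(f)
```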
Thanks to the awesome implementations from pytorch-faster-rcnn and PyTorch-CycleGAN.