PyTorch implementation of our method for adapting semantic segmentation from a synthetic dataset (source domain) to a real dataset (target domain). Based on this implementation, our result is ranked 3rd in the VisDA Challenge.
Contact: Yi-Hsuan Tsai (wasidennis at gmail dot com) and Wei-Chih Hung (whung8 at ucmerced dot edu)
## Paper

Learning to Adapt Structured Output Space for Semantic Segmentation

Yi-Hsuan Tsai\*, Wei-Chih Hung\*, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang and Manmohan Chandraker

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018 (spotlight) (\* indicates equal contribution).
Please cite our paper if you find it useful for your research.
```
@inproceedings{Tsai_adaptseg_2018,
  author = {Y.-H. Tsai and W.-C. Hung and S. Schulter and K. Sohn and M.-H. Yang and M. Chandraker},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  title = {Learning to Adapt Structured Output Space for Semantic Segmentation},
  year = {2018}
}
```
## Prerequisites
- Install PyTorch from http://pytorch.org with Python 2 and CUDA 8.0
- **NEW** Add the LS-GAN objective to improve the performance
  - Usage: add the `--gan LS` option during training (see below for more details, including a loss sketch after the training examples)
- **NEW** Support PyTorch 0.4 with Python 3 and CUDA 8.0
  - Usage: replace the training and evaluation code with the versions in the `pytorch_0.4` folder
  - Update: tensorboard logging is enabled by adding `--tensorboard` to the command
  - Note: the single-level model works as expected, while the multi-level model requires smaller weights, e.g., `--lambda-adv-target1 0.00005 --lambda-adv-target2 0.0005`. We will investigate this issue soon.

## Getting Started
- Clone this repo:
```
git clone https://github.com/wasidennis/AdaptSegNet
cd AdaptSegNet
```
- Download the GTA5 Dataset as the source domain, and put it in the `data/GTA5` folder
- Download the Cityscapes Dataset as the target domain, and put it in the `data/Cityscapes` folder
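The training and evaluation scripts read from `./data` by default. As a quick sanity check, the snippet below verifies the expected directory layout; apart from the `gtFine` path, which appears in the IoU command in the Testing section, the subfolder names here are assumptions based on how the two datasets are commonly packaged.

```python
# Sanity-check the expected dataset layout (a sketch: apart from the gtFine
# path used by the compute_iou.py call below, these subfolder names are
# assumptions based on the datasets' standard packaging).
import os

EXPECTED_DIRS = [
    "data/GTA5/images",                      # GTA5 RGB frames (assumed)
    "data/GTA5/labels",                      # GTA5 label maps (assumed)
    "data/Cityscapes/data/leftImg8bit/val",  # Cityscapes images (assumed)
    "data/Cityscapes/data/gtFine/val",       # Cityscapes annotations (per compute_iou.py call)
]

for d in EXPECTED_DIRS:
    print("{:42s} {}".format(d, "ok" if os.path.isdir(d) else "MISSING"))
```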
## Pre-trained Models
- Please find our pre-trained models using ResNet-101 on three benchmark settings here
- They include baselines (without adaptation and with feature adaptation) and our models (single-level and multi-level)
- **NEW** Updated results using LS-GAN and using Synscapes as the source domain
## Testing
- Download the pre-trained multi-level GTA5-to-Cityscapes model and put it in the `model` folder
- Test the model; results will be saved in the `result` folder:

```
python evaluate_cityscapes.py --restore-from ./model/GTA2Cityscapes_multi-ed35151c.pth
```
- Or, test the VGG-16 based model:

```
python evaluate_cityscapes.py --model DeeplabVGG --restore-from ./model/GTA2Cityscapes_vgg-ac4ac9f6.pth
```
- Compute the IoU on Cityscapes:

```
python compute_iou.py ./data/Cityscapes/data/gtFine/val result/cityscapes
```
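For reference, per-class IoU is the standard intersection-over-union computed from a confusion matrix accumulated over all validation images. Below is a minimal sketch of that computation; `compute_iou.py` in this repo may differ in details such as label mapping, void-class handling, and averaging.

```python
# Minimal sketch of per-class IoU from an accumulated confusion matrix;
# compute_iou.py may differ in details (label mapping, ignored classes).
import numpy as np

def fast_hist(gt, pred, num_classes):
    """Confusion matrix for one image; gt and pred are flat integer label arrays."""
    mask = (gt >= 0) & (gt < num_classes)  # drop void / ignored labels
    return np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask],
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def per_class_iou(hist):
    """hist[i, j] counts pixels of ground-truth class i predicted as class j."""
    inter = np.diag(hist)
    union = hist.sum(axis=0) + hist.sum(axis=1) - inter
    return inter / np.maximum(union, 1)  # avoid division by zero for absent classes
```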
## Training Examples
- **NEW** Train the GTA5-to-Cityscapes model (single-level with LS-GAN):

```
python train_gta2cityscapes_multi.py --snapshot-dir ./snapshots/GTA2Cityscapes_single_lsgan \
                                     --lambda-seg 0.0 \
                                     --lambda-adv-target1 0.0 --lambda-adv-target2 0.01 \
                                     --gan LS
```
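With `--gan LS`, the discriminator is trained with the least-squares objective of LSGAN (Mao et al., 2017) instead of binary cross-entropy, which tends to give smoother, non-saturating gradients. A minimal sketch of the two choices, assuming a discriminator that outputs an unnormalized score map (the function names here are illustrative, not the repo's API):

```python
# Sketch of the vanilla vs. LS-GAN adversarial losses, assuming a
# discriminator that outputs raw (unnormalized) scores.
import torch
import torch.nn.functional as F

def d_loss(score, is_source, gan="Vanilla"):
    # Discriminator target: 1 for source-domain outputs, 0 for target-domain outputs.
    label = torch.full_like(score, 1.0 if is_source else 0.0)
    if gan == "LS":
        return F.mse_loss(score, label)                       # least-squares objective
    return F.binary_cross_entropy_with_logits(score, label)   # vanilla GAN objective

def adv_loss(score_target, gan="Vanilla"):
    # The segmentation network is trained to make target outputs look source-like (label 1).
    label = torch.ones_like(score_target)
    if gan == "LS":
        return F.mse_loss(score_target, label)
    return F.binary_cross_entropy_with_logits(score_target, label)
```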
- Train the GTA5-to-Cityscapes model (multi-level):

```
python train_gta2cityscapes_multi.py --snapshot-dir ./snapshots/GTA2Cityscapes_multi \
                                     --lambda-seg 0.1 \
                                     --lambda-adv-target1 0.0002 --lambda-adv-target2 0.001
```
- Train the GTA5-to-Cityscapes model (single-level):

```
python train_gta2cityscapes_multi.py --snapshot-dir ./snapshots/GTA2Cityscapes_single \
                                     --lambda-seg 0.0 \
                                     --lambda-adv-target1 0.0 --lambda-adv-target2 0.001
```
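For orientation, the `--lambda-*` flags weight the terms of the paper's multi-level objective: `--lambda-seg` weights the auxiliary segmentation loss on the lower-level output, while `--lambda-adv-target1` and `--lambda-adv-target2` weight the adversarial losses at the two output levels (setting `--lambda-adv-target1 0.0` disables the first level, which is exactly the single-level configuration above). A hedged transcription, with level 2 denoting the final output and $I_s$, $I_t$ the source and target images:

```latex
\mathcal{L}(I_s, I_t) =
    \mathcal{L}_{seg}^{(2)}(I_s)
  + \lambda_{seg}\, \mathcal{L}_{seg}^{(1)}(I_s)
  + \lambda_{adv}^{(1)}\, \mathcal{L}_{adv}^{(1)}(I_t)
  + \lambda_{adv}^{(2)}\, \mathcal{L}_{adv}^{(2)}(I_t)
```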
## Acknowledgment
This code is heavily borrowed from Pytorch-Deeplab.

## Note
The model and code are available for non-commercial research purposes only.