Multi-Anchor Active Domain Adaptation for Semantic Segmentation
Munan Ning, Donghuan Lu, Dong Wei†, Cheng Bian, Chenglang Yuan, Shuang Yu, Kai Ma, Yefeng Zheng
[Paper] [PPT] [Graphic Abstract]
This repository contains the implementation of the MADA method described in the ICCV 2021 Oral paper "Multi-Anchor Active Domain Adaptation for Semantic Segmentation".
The code requires PyTorch >= 0.4.1 and Python 3.6, and was trained on an NVIDIA Tesla V100 with 32 GB of memory. To run on a GPU with less memory, simply reduce the batch size in stage 2.
Preparation
Set up the config files.
Training (quick)
python3 train_active_stage1.py
python3 train_active_stage2.py
Evaluation
Run
python3 test.py
to see the results.
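For semantic segmentation, the evaluation script typically reports mean intersection-over-union (mIoU). As a reference for what that metric computes, here is a minimal NumPy sketch (a hypothetical helper, not the repository's actual code; shapes and the `ignore_index` convention are assumptions):

```python
import numpy as np

def mean_iou(pred, label, num_classes, ignore_index=255):
    """Mean IoU over classes present in prediction or label.

    pred, label: integer class maps of the same shape (hypothetical inputs).
    Pixels whose label equals `ignore_index` are excluded from scoring.
    """
    mask = label != ignore_index
    pred, label = pred[mask], label[mask]
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (label == c))
        union = np.sum((pred == c) | (label == c))
        if union:  # skip classes absent from both prediction and label
            ious.append(inter / union)
    return float(np.mean(ious))
```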
Training (whole process)
python3 save_feat_source.py
python3 cluster_anchors_source.py
python3 select_active_samples.py
python3 train_active_stage1.py
python3 save_feat_target.py
python3 cluster_anchors_target.py
python3 train_active_stage2.py
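In the pipeline above, source features are clustered into anchors and active target samples are then selected relative to those anchors. A minimal sketch of one plausible selection criterion, picking the target samples least explained by the source anchors (pure NumPy; the function name, shapes, and criterion are illustrative assumptions, not the repository's actual implementation):

```python
import numpy as np

def select_active_samples(target_feats, source_anchors, budget):
    """Pick the target samples farthest from all source anchors.

    target_feats:   (N, D) per-image features (hypothetical shapes)
    source_anchors: (K, D) cluster centroids of source-domain features
    budget:         number of images to send for annotation
    """
    # Distance from each target feature to every source anchor: (N, K)
    dists = np.linalg.norm(
        target_feats[:, None, :] - source_anchors[None, :, :], axis=-1
    )
    # A sample well explained by the source domain is close to some anchor;
    # samples far from *all* anchors are the most domain-distinct.
    min_dist = dists.min(axis=1)
    # Indices of the `budget` most distant samples, most distant first.
    return np.argsort(-min_dist)[:budget]
```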
This code borrows heavily from CAG_UDA (https://github.com/RogerZhangzz/CAG_UDA).
If you find this code useful, please cite:
@inproceedings{ning2021multi,
title={Multi-Anchor Active Domain Adaptation for Semantic Segmentation},
author={Ning, Munan and Lu, Donghuan and Wei, Dong and Bian, Cheng and Yuan, Chenglang and Yu, Shuang and Ma, Kai and Zheng, Yefeng},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={9112--9122},
year={2021}
}
The anchors are calculated from features captured by the decoders.
In this paper, we utilize the more powerful decoder of DeepLabV3+, which may make some comparisons unfair. We therefore strongly recommend ProDA, which uses the original DeepLabV2 decoder.
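Since the anchors are cluster centroids of decoder features, the computation can be sketched as a plain k-means over the saved features (a self-contained NumPy illustration; the function name, shapes, and hyperparameters are assumptions, not the repository's code):

```python
import numpy as np

def compute_anchors(feats, k, iters=20, seed=0):
    """Plain k-means over decoder features; the centroids serve as anchors.

    feats: (N, D) array of per-image decoder features (hypothetical shape).
    Returns a (k, D) array of anchor vectors.
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct feature vectors.
    centroids = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature to its nearest centroid.
        d = np.linalg.norm(feats[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned members.
        for j in range(k):
            members = feats[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids
```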