microsoft / ProDA

Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation (CVPR 2021)
https://arxiv.org/abs/2101.10979
MIT License

Exact command line of warmup stage #42

Open shahaf1313 opened 2 years ago

shahaf1313 commented 2 years ago

Hey :) Can you guys please supply the command line used to train the warmup phase? I'm trying to retrain a network "from the beginning". Thanks! Shahaf

pascal1129 commented 2 years ago

I have a similar question.

The provided checkpoint (warm-up model) reaches 43.3 mIoU, while the paper reports 41.6 mIoU, which confuses me.

panzhang0104 commented 2 years ago

You can try one of the commands below to get the warmup model:

python train.py --name gta2cityv2_warmupd_ls --model_name deeplabv2 --warm_up --freeze_bn --gan LS --lr 2.5e-4 --adv 0.01 --no_resume

python train.py --name gta2cityv2_warmupd_ls_S --model_name deeplabv2 --warm_up --freeze_bn --gan LS --lr 2.5e-4 --adv 0.01 --no_resume --S_pseudo_src 1
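For readers wondering what `--gan LS --adv 0.01` does during warmup: a minimal numpy sketch of the least-squares (LSGAN) adversarial objective that the `LS` option typically denotes, with the adversarial weight matching `--adv 0.01`. The function names are hypothetical and this is only an illustration of the loss shape, not ProDA's actual implementation.

```python
import numpy as np

def ls_disc_loss(d_src, d_tgt):
    # Least-squares discriminator loss: push discriminator outputs on
    # source features toward 1 and on target features toward 0.
    return 0.5 * (np.mean((d_src - 1.0) ** 2) + np.mean(d_tgt ** 2))

def ls_adv_loss(d_tgt, weight=0.01):
    # Least-squares adversarial loss on the segmenter: fool the
    # discriminator by pushing its target outputs toward the source
    # label 1; weight corresponds to the --adv 0.01 flag above.
    return weight * 0.5 * np.mean((d_tgt - 1.0) ** 2)

# Example: a perfectly fooled discriminator gives zero adversarial loss.
d_tgt = np.ones(4)
print(ls_adv_loss(d_tgt))  # → 0.0
```

Because this objective is a minimax game rather than a fixed target, small run-to-run differences in warmup mIoU (like 41.6 vs 43.3) are expected.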

panzhang0104 commented 2 years ago

@pascal1129 To make sure the released code is reproducible, we cleaned the code and retrained the whole framework, so the released model comes from the cleaned code. Since the warmup stage is trained with an adversarial loss, it is unstable; the result of this stage may differ slightly, but it does not affect the result of the final stage.