By the way, I tested at iter/mIoU: 20000/41.48, 25000/43.32, 75000/34.34.
@jianlong-yuan
Thanks for the question.
I have to admit that segmentation adaptation has a large variance. Although Memory Regularization generally provides better results, the performance can differ from run to run.
In addition, I also cannot reproduce your performance for stage 2. I used your stage-1 model (45.46 mIoU) to generate pseudo labels, then ran your command:

```bash
python train_ft.py --snapshot-dir ./snapshots/1280x640_restore_ft_GN_batchsize9_512x256_pp_ms_me0_classbalance7_kl0_lr1_drop0.2_seg0.5_BN_80_255_0.8_Noaug \
    --restore-from ./snapshots/SE_GN_batchsize2_1024x512_pp_ms_me0_classbalance7_kl0.1_lr2_drop0.1_seg0.5/GTA5_25000.pth \
    --drop 0.2 --warm-up 5000 --batch-size 9 --learning-rate 1e-4 --crop-size 512,256 \
    --lambda-seg 0.5 --lambda-adv-target1 0 --lambda-adv-target2 0 \
    --lambda-me-target 0 --lambda-kl-target 0 --norm-style gn --class-balance \
    --only-hard-label 80 --max-value 7 --gpu-ids 0,1,2 --often-balance --use-se \
    --input-size 1280,640 --train_bn --autoaug False
```

But the performance is poor.
You may test the models of different iterations. I usually use the model of the 25000-th or 50000-th iteration. Since the evaluation metric is averaged over classes, the performance can be affected by rare classes, such as train/bike.
In my practice, the Stage-II model usually achieves around 49~50 mIoU.
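For reference, a minimal sketch of such a checkpoint sweep. The `evaluate_cityscapes.py` call and the `mIoU:` output line are assumptions about the evaluation script, not something this thread confirms; swap in whatever command you actually use to compute mIoU:

```python
import glob
import subprocess

def eval_miou(snapshot):
    # Hypothetical: call an evaluation script and parse the printed mIoU.
    out = subprocess.run(
        ['python', 'evaluate_cityscapes.py', '--restore-from', snapshot],
        capture_output=True, text=True, check=True,
    ).stdout
    # Assumes the script prints a line like "mIoU: 43.32".
    return float(out.rsplit('mIoU:', 1)[1].split()[0])

scores = {s: eval_miou(s) for s in sorted(glob.glob('./snapshots/*/GTA5_*.pth'))}
best = max(scores, key=scores.get)
print('best checkpoint:', best, scores[best])
```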
For stage 1, I checked the BN layers and found that BN is not trainable. So, is there anything different when training on one GPU?
Hi @jianlong-yuan, recently I re-ran my code with different dropout rates. All runs achieve about 50% mIoU.
The code runs on 3 GPUs.
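For the BN question above, here is a quick generic PyTorch check (the `resnet50` is only a stand-in for the actual backbone) to see which BN layers would actually be updated; this is presumably what the `--train_bn` flag in the command controls:

```python
import torch
import torchvision

# Stand-in backbone for illustration; substitute the real segmentation model.
model = torchvision.models.resnet50()

for name, module in model.named_modules():
    if isinstance(module, torch.nn.BatchNorm2d):
        affine_trainable = all(p.requires_grad for p in module.parameters())
        stats_updating = module.training and module.track_running_stats
        print(f'{name}: affine trainable={affine_trainable}, '
              f'running stats updating={stats_updating}')
```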
For stage 2: I loaded your pretrained model `cityscapes1280x640_restore_ft_GN_batchsize9_512x256_pp_ms_me0_classbalance7_kl0_lr1_drop0.2_seg0.5_BN_80_255_0.8_Noaug`. I found the drop rate is different from yours, but the README says 0.2, so I tested all the models with drop rate 0.2. I think the results are close to yours:

| Iteration | mIoU |
| --- | --- |
| 10000 | 48.74 |
| 15000 | 48.75 |
| 20000 | 49.68 |
| 25000 | 48.95 |
| 30000 | 49.79 |
| 35000 | 48.52 |
| 40000 | 49.72 |
| 45000 | 49.02 |
| 50000 | 48.52 |
For stage 1, with `cityscapesSE_GN_batchsize2_1024x512_pp_ms_me0_classbalance7_kl0.1_lr2_drop0.1_seg0.5` I used drop rate 0.1. Is it different from yours? I tested all the models:

| Iteration | mIoU |
| --- | --- |
| 10000 | 34.45 |
| 15000 | 36.33 |
| 20000 | 41.48 |
| 25000 | 43.32 |
| 30000 | 41.33 |
| 35000 | 40.07 |
| 40000 | 40.15 |
| 45000 | 39.93 |
| 50000 | 39.39 |
| 55000 | 36.24 |
| 60000 | 37.57 |
| 65000 | 34.91 |
| 70000 | 33.57 |
| 75000 | 34.34 |
| 80000 | 33.48 |
| 85000 | 33.52 |
| 90000 | 32.0 |
| 100000 | 32.08 |
By the way, I found your baseline model is different from others'. Your baseline model adds SE and GN. Have you compared these differences?
@jianlong-yuan
Thank you. I tried again with drop 0.3 and got 45.2 at 20k iterations. It is almost the same as yours.
```python
if i_iter < 15000:
    self.lambda_kl_target_copy = 0
    self.lambda_me_target_copy = 0
else:
    self.lambda_kl_target_copy = self.lambda_kl_target
    self.lambda_me_target_copy = self.lambda_me_target
```
I found that you don't use these losses at the beginning, but only enable them after a period of training. I didn't find an explanation for this in the paper. Could you explain why you did this? Thank you.
Hi @jianlong-yuan Sorry for the late response. I was preparing the rebuttal at that time, so I missed your message. Yes, it is a small trick. The predictions of the main classifier and the auxiliary classifier are not stable at the beginning. Therefore, I only enable these losses in the middle of training.
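In code, the trick is just a step schedule on the loss weights. A minimal standalone sketch of the same logic (only the 15000 threshold comes from the snippet above; the names here are illustrative):

```python
def delayed_weight(i_iter, full_weight, start_iter=15000):
    # Step schedule: keep the entropy/KL terms off until both classifiers
    # have stabilized, then switch them on at full weight.
    return 0.0 if i_iter < start_iter else full_weight

# Illustrative usage in the training loop:
# loss = loss_seg + delayed_weight(i_iter, lambda_me) * loss_me \
#                 + delayed_weight(i_iter, lambda_kl) * loss_kl
```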
I just ran the same settings as your code, but I only got 43.32 mIoU, compared with your 45.46.