lhoyer / DAFormer

[CVPR22] Official Implementation of DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

MiT-B3 is much better than MiT-B4 #47

Closed · 5as4as closed this 1 year ago

5as4as commented 2 years ago

Dear authors, thank you for your outstanding work. While reproducing your results, I encountered something puzzling: in the GTAV->Cityscapes experiment, with the MiT-B5 backbone I obtain results similar to the paper (68.3 mIoU); with MiT-B4 I get 66.69 mIoU; and with MiT-B3 I get 67.91 mIoU. I am confused why MiT-B3 performs so much better than MiT-B4. Have you conducted similar experiments, and if so, what were your results?
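
For context, the backbone swap amounts to a config change along these lines; this is only a minimal sketch assuming mmsegmentation-style config keys, with the base config name and checkpoint path as placeholders rather than this repo's exact files:

```python
# Sketch of a backbone swap in an mmsegmentation-style config.
# The base config name and the pretrained checkpoint path are
# placeholders; adapt them to the actual repo layout.
_base_ = ['./gta2cs_daformer_mitb5.py']  # hypothetical base config

model = dict(
    pretrained='pretrained/mit_b3.pth',  # ImageNet-pretrained MiT-B3 weights (assumed path)
    backbone=dict(type='mit_b3'),        # replace mit_b5 with mit_b3 or mit_b4
)
```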

lhoyer commented 2 years ago

Dear @5as4as,

Thank you for your interest in our work. We conducted some experiments with smaller MiT backbones using the baseline UDA (without RCS and FD), as shown in Tab. 3. However, we did not try smaller MiT backbones together with RCS, FD, and the proposed decoder. I agree that your results are somewhat unexpected: they suggest that RCS, FD, and the proposed decoder give an even greater improvement for the smaller backbones than Tab. 3 would indicate. However, I don't know why MiT-B3 performs better than MiT-B4. Have you run the experiment with multiple seeds to exclude random fluctuations in performance? A sketch of what I mean is shown below.
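
As a minimal illustration of running the same experiment under several seeds (the script name and the `--seed` flag are assumptions about the launch interface; substitute the actual training command of this codebase):

```python
# Sketch: launch the same UDA experiment with several random seeds.
# The script name, config path, and --seed flag are assumptions;
# replace them with the repo's actual training command.
import subprocess

CONFIG = "configs/daformer/gta2cs_daformer_mitb3.py"  # hypothetical config path

for seed in (0, 1, 2):
    subprocess.run(
        ["python", "run_experiments.py", "--config", CONFIG, "--seed", str(seed)],
        check=True,
    )
```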

Best, Lukas

5as4as commented 2 years ago

Dear @lhoyer, thank you for your reply. I see that you set the random seed to 0 in your config file, so I have not tried multiple seeds. However, my results are the average of two runs, which should make them reliable to some extent.
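
For what it's worth, with so few runs it helps to report the spread alongside the mean; a small sketch (the mIoU values below are placeholders, not measured results):

```python
# Sketch: summarize mIoU over repeated runs. With only two runs the
# sample standard deviation is a weak estimate, so the individual
# values are printed as well. The numbers below are placeholders.
import statistics

runs = [67.5, 68.3]  # placeholder mIoU values; replace with actual results

mean = statistics.mean(runs)
spread = statistics.stdev(runs) if len(runs) > 1 else 0.0
print(f"mIoU: {mean:.2f} +/- {spread:.2f} over {len(runs)} runs: {runs}")
```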