lhoyer / DAFormer

[CVPR22] Official Implementation of DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

Performance gap between training DACS with SwinTransformer and SegFormer #13

Closed super233 closed 2 years ago

super233 commented 2 years ago

Hi, sorry to disturb you again. I want to ask some questions about training with SwinTransformer; the title may not be entirely appropriate.

I have successfully reproduced the UDA result of SegFormer in Table 1, which finally reaches 58.82 mIoU, close to your reported result.

Meanwhile, I did the same experiment with Swin-B, but the result was worse than SegFormer: the best performance was 48.1 at 24,000 iterations (the training stopped unexpectedly), whereas SegFormer reached 53.81 at the same 24,000 iterations. The dataset, training schedule, and other parameters are identical to the SegFormer setup; the only modification is the model.
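For context, swapping the backbone in an mmsegmentation-style model config might look like the sketch below. The keys and values here are illustrative assumptions, not copied from my actual DAFormer config or the training log:

```python
# Hypothetical sketch of replacing the SegFormer (MiT-B5) backbone with
# Swin-B in an mmsegmentation-style config. All keys/values below are
# illustrative assumptions, not the exact DAFormer configuration.
model = dict(
    type='EncoderDecoder',
    backbone=dict(
        type='SwinTransformer',   # instead of the MiT-B5 backbone
        embed_dims=128,           # Swin-B base channel width
        depths=[2, 2, 18, 2],     # Swin-B stage depths
        num_heads=[4, 8, 16, 32],
        window_size=7,
    ),
    decode_head=dict(
        # The decode head's in_channels must match the new backbone's
        # per-stage output channels (128, 256, 512, 1024 for Swin-B).
        in_channels=[128, 256, 512, 1024],
    ),
)
```

Everything else (dataset, DACS/UDA settings, optimizer, schedule) is left untouched.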

The training log is here: gta2cs_dacs_swin_base_poly10warm_s0.log

Did you experiment with SwinTransformer before? Why is there such a big performance gap? Could you share your thoughts on this?

Looking forward to your reply. :-)

lhoyer commented 2 years ago

Hi, we didn't experiment with a Swin Transformer backbone, so unfortunately I cannot provide any experimentally grounded insights on this.

super233 commented 2 years ago

Ok, thanks for your reply.