lhoyer / DAFormer

[CVPR22] Official Implementation of DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

Difference to DACS #20

Closed · wangkaihong closed this issue 2 years ago

wangkaihong commented 2 years ago

Hi,

Thanks for the amazing work! I am wondering what the difference is between the experiment in the 9th row of Tab. 5 (in which you removed everything and adopted DeepLabV2) and the original DACS. Is the difference the ST strategy, i.e., the mean teacher? In that case, shouldn't it generally work better than DACS as reported in Tab. 6, which does not apply this more advanced ST technique?

Thanks!

lhoyer commented 2 years ago

Hi Kaihong,

Thanks for your interest in our work! The UDA method used in row 9 of Table 5 is mostly equivalent to DACS. In particular, DACS also uses self-training with a mean teacher. However, in our experiments, we use a different optimizer, learning rate, and number of training iterations than DACS, which might explain the performance difference.
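
For concreteness, here is a minimal sketch of self-training with a mean teacher, the component shared by both methods (illustrative PyTorch only, not the actual DAFormer or DACS training code; the function names, the confidence threshold, and the per-pixel masking are assumptions, and DACS's ClassMix mixing of source and target images is omitted):

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.999):
    # Teacher weights are an exponential moving average of the student weights.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.data.mul_(alpha).add_(s_p.data, alpha=1 - alpha)

def self_training_step(student, teacher, optimizer,
                       source_img, source_lbl, target_img,
                       conf_thresh=0.968):  # threshold is illustrative
    # Supervised cross-entropy on labeled source-domain data.
    loss = F.cross_entropy(student(source_img), source_lbl, ignore_index=255)

    # Pseudo-labels for the unlabeled target domain come from the mean teacher.
    with torch.no_grad():
        probs = torch.softmax(teacher(target_img), dim=1)
        conf, pseudo_lbl = probs.max(dim=1)
        pseudo_lbl[conf < conf_thresh] = 255  # mask out low-confidence pixels

    # Unsupervised loss on the pseudo-labeled target data.
    loss = loss + F.cross_entropy(student(target_img), pseudo_lbl,
                                  ignore_index=255)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # update teacher after each student step
    return loss.item()
```

The teacher is initialized as a copy of the student and never receives gradients; only the EMA update changes its weights. With this component held fixed, what remains different between row 9 and DACS is the optimization setup mentioned above.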

Best, Lukas

wangkaihong commented 2 years ago

Terrific, that makes sense to me, thanks for the fast and clear response!