Closed wangkaihong closed 2 years ago
Hi,
Thanks for the amazing work! I am just wondering: what is the difference between the experiment in row 9 of Tab. 5 (in which you removed everything and adopted DeepLabV2) and the original DACS? Is it the ST strategy, i.e., the mean teacher? In that case, shouldn't it generally work better than the DACS result reported in Tab. 6, which does not apply this more advanced ST technique?
Thanks!

Hi Kaihong,
Thanks for your interest in our work! The UDA method used in row 9 of Table 5 is mostly equivalent to DACS. Specifically, DACS also uses self-training with a mean teacher. However, in our experiments, we use a different optimizer, learning rate, and number of training iterations than DACS, which might explain the performance difference.
Best, Lukas

Terrific, that makes sense to me, thanks for the fast and clear response!
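For readers unfamiliar with the mean-teacher component of the self-training discussed above: the teacher is an exponential moving average (EMA) of the student's weights and is used to generate pseudo-labels on target data. A minimal sketch of the EMA update, assuming plain Python lists of floats stand in for network weights (the function name `ema_update` and the constants are illustrative, not taken from the DACS or DAFormer code):

```python
def ema_update(teacher_w, student_w, alpha=0.99):
    """Move the teacher weights toward the student weights via an
    exponential moving average: t <- alpha * t + (1 - alpha) * s.
    In practice this is applied after each student optimizer step;
    the slowly-updated teacher then predicts pseudo-labels on
    unlabeled target images for the student to learn from."""
    return [alpha * t + (1 - alpha) * s
            for t, s in zip(teacher_w, student_w)]


# Usage: the teacher starts as a copy of the student and is nudged
# toward the student after every training step.
teacher = [0.0, 0.0]   # toy "weights" of the teacher network
student = [1.0, 1.0]   # toy "weights" of the student after one update
teacher = ema_update(teacher, student, alpha=0.9)
# teacher is now approximately [0.1, 0.1]
```

With alpha close to 1, the teacher changes slowly, which is what makes its pseudo-labels more stable than the student's raw predictions.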