Open BarY7 opened 1 week ago
We follow the same training strategy as SFDA-DPL on the source domain. However, due to slight differences between our model and the SFDA-DPL model (you can refer to the model definition), we cannot reuse its released source model. We therefore reproduced its source-domain training procedure. For a consistent comparison, we report the "W/o DA" score from the SFDA-DPL paper.
On Domain1, the initial scores are already very close to the upper bound, so the improvements are marginal.
Thanks for answering @lloongx !
Could you elaborate on the source model change (why was it needed)?
I did not see this mentioned in the experiments section of the paper. In the code I see that the decoder is indeed different in the last conv layer; is that the change you mentioned, or is there anything else?
Sorry for the late reply.
> the decoder part is indeed different in the last conv layer
You are right, that is where we changed the model. The implementation in SFDA-DPL is inherited from BEAL and differs from the original DeepLabv3+, so we restored it to the standard version.
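To illustrate the kind of difference being discussed, here is a minimal PyTorch sketch contrasting a simplified segmentation head (a single classifier conv) with the original DeepLabv3+ decoder tail (3x3 conv+BN+ReLU blocks followed by a 1x1 classifier). The function names, channel sizes, and exact layer layout are assumptions for illustration, not the actual code from this repo, SFDA-DPL, or BEAL.

```python
import torch
import torch.nn as nn

def simplified_head(in_ch: int, num_classes: int) -> nn.Module:
    # Hypothetical simplified tail: a single conv straight to class logits.
    return nn.Conv2d(in_ch, num_classes, kernel_size=1)

def deeplabv3plus_head(in_ch: int, num_classes: int) -> nn.Module:
    # Original DeepLabv3+ decoder tail: two 3x3 conv+BN+ReLU blocks
    # refining the fused features, then a 1x1 classifier conv.
    return nn.Sequential(
        nn.Conv2d(in_ch, 256, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(256),
        nn.ReLU(inplace=True),
        nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(256),
        nn.ReLU(inplace=True),
        nn.Conv2d(256, num_classes, kernel_size=1),
    )

# Dummy fused decoder features (304 channels is an assumption).
x = torch.randn(2, 304, 64, 64)
print(simplified_head(304, 2)(x).shape)     # torch.Size([2, 2, 64, 64])
print(deeplabv3plus_head(304, 2)(x).shape)  # torch.Size([2, 2, 64, 64])
```

Both heads produce logits of the same shape, so swapping one for the other changes the learned features (and hence the pretrained weights are incompatible) without changing the output interface.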
Hi @lloongx , thank you for the paper! I have a question regarding the uploaded trained model for Domain3. The W/o DA performance of this model seems too high compared to the results in the paper (especially for Domain1, where it nearly matches the results after the method has been applied).
These are my results for Domain1, Domain2 after downloading the dataset and model:
Domain1:
Domain2: