wasidennis / AdaptSegNet

Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)
847 stars 203 forks source link

Why is performance highly degraded when performing model.eval()? #79

Open qimw opened 4 years ago

qimw commented 4 years ago

I am adapting from a light dataset to a darker one. When running a test on the source domain, I found that performance is highly degraded if I call model.eval(). But this does not happen on the target domain. It is quite weird. My PyTorch version is 1.0.

qimw commented 4 years ago

I found that the BatchNorm layers are set to requires_grad = False, but they still update their running_mean and running_var. So what is the point of doing this?
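This behavior is easy to reproduce in isolation. A minimal sketch (not the repo's code, just plain PyTorch): setting `requires_grad = False` on a BatchNorm layer freezes gamma/beta (the affine parameters), but the running statistics are buffers, not parameters, so they are still updated on every forward pass while the module is in `train()` mode.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
for p in bn.parameters():  # gamma (weight) and beta (bias)
    p.requires_grad = False

before = bn.running_mean.clone()
bn.train()
_ = bn(torch.randn(8, 3, 16, 16))  # one forward pass in training mode

# running_mean has moved toward the batch statistics despite requires_grad=False
print(torch.allclose(before, bn.running_mean))
```

This prints `False`: the running statistics changed even though no gradient flows into gamma/beta. Only `bn.eval()` (or `track_running_stats=False` at construction) prevents the buffer updates.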

wasidennis commented 4 years ago

It seems that the gamma and beta in BatchNorm are still updated (we also observed this before), but we cannot control it. Some degraded performance on the source domain is natural compared to a model without target-domain alignment. However, it should not be extremely bad, since there is still a supervised loss on the source domain.

qimw commented 4 years ago

No, the result is extremely bad. Maybe this is due to the large domain gap. In the last iterations we train the model on the target domain, so the BatchNorm statistics (running_mean, running_var) adapt to the target domain at the same time. When we then test on the source domain, the BatchNorm statistics no longer match that domain, so performance drops.
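If the goal is to keep the source-domain statistics fixed during adaptation, one common workaround is to put only the BatchNorm modules in `eval()` mode while the rest of the network keeps training. A hedged sketch with a hypothetical helper (`freeze_bn_stats` is not part of this repo):

```python
import torch
import torch.nn as nn

def freeze_bn_stats(model: nn.Module) -> None:
    """Put only BatchNorm layers in eval() mode so running_mean/running_var
    keep their current (e.g. source-domain) values; gamma/beta stay trainable."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.eval()

# Toy demonstration that the statistics no longer move:
net = nn.Sequential(nn.Conv2d(3, 4, 3), nn.BatchNorm2d(4))
net.train()            # whole network in training mode
freeze_bn_stats(net)   # must be re-applied after every model.train() call

stats_before = net[1].running_mean.clone()
_ = net(torch.randn(2, 3, 8, 8))
print(torch.allclose(stats_before, net[1].running_mean))  # True: stats frozen
```

Note that `model.train()` flips every submodule back to training mode, so the helper has to be called again after each `model.train()` in the training loop.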