google-research / uda

Unsupervised Data Augmentation (UDA)
https://arxiv.org/abs/1904.12848
Apache License 2.0

BatchNorm in WRN #98

Open bkj opened 3 years ago

bkj commented 3 years ago

Hi --

I noticed that the last BatchNorm in WRN always runs in training mode (is_training=True): https://github.com/google-research/uda/blob/master/image/randaugment/wrn.py#L117

All of the other BNs switch is_training between training and evaluation. Is this intentional? Does it give some kind of performance advantage?

Thanks!

bkj commented 3 years ago

Digging into this deeper -- it seems that using batch statistics vs. running statistics makes a fair bit of difference in the convergence of the model. Do you have a good explanation for that? It seems interesting and surprising to me.
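
To make the distinction concrete, here is a minimal numpy sketch of the two normalization modes (this is an illustration of batch-norm behavior in general, not code from the UDA repo; the momentum/running-stat values are made up for the example). In training mode BN normalizes with the current mini-batch's mean and variance, so its output is always roughly zero-mean unit-variance; in inference mode it normalizes with accumulated running statistics, which can be stale or mismatched with the current activation distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical running statistics, e.g. accumulated with a momentum update
# during earlier training. Values are illustrative only.
running_mean, running_var = 0.0, 1.0

# One mini-batch of activations whose distribution has drifted away
# from the running statistics.
x = rng.normal(loc=2.0, scale=3.0, size=256)
eps = 1e-5

# is_training=True: normalize with the batch's own statistics.
batch_mean, batch_var = x.mean(), x.var()
x_batch = (x - batch_mean) / np.sqrt(batch_var + eps)

# is_training=False: normalize with the (stale) running statistics.
x_running = (x - running_mean) / np.sqrt(running_var + eps)

print(x_batch.mean(), x_batch.std())      # ~0, ~1: always normalized
print(x_running.mean(), x_running.std())  # ~2, ~3: far from normalized
```

If the final BN always uses batch statistics, the layer it feeds sees a well-normalized input at every step regardless of how the running averages are tracking, which is one plausible reason the choice would affect convergence.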