Closed: aserdega closed this issue 6 years ago
Hello,
First of all, thank you for sharing the code and the work that you did.
I am playing around with your implementation on the digits dataset from the link you provided (SVHN -> MNIST). The thing is, I am a little confused by your choice of normalization for the F-network inputs.
Could you please explain why you normalize in this way and what your intuition behind it was? Is it just the result of empirical observation? I am very curious about it. I have tried removing this normalization or changing it to (x - 0.5) * 2, but it resulted in lower target-domain accuracy.
Looking forward to your reply, and thanks in advance!

Hi, the mean and the variance used are the statistics computed for the SVHN dataset. We normalize both the source- and target-domain images using the mean/variance of the source domain. In the discriminator, we re-normalize all images to the [-1, 1] range; this is a standard technique used in most GAN training. Also, the last layer of the generator is a Tanh, which squeezes its outputs into the [-1, 1] range. So the feature network takes inputs normalized by the source-domain mean/variance, while the generator network produces images in the [-1, 1] range. Hope this explains it.
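For reference, here is a minimal sketch of the two normalization paths described above, written with torchvision-style transforms. The names (`svhn_mean`, `svhn_std`, `renormalize_for_discriminator`) and the placeholder statistics are illustrative assumptions, not the repo's actual constants or code.

```python
# Minimal sketch, not the repo's exact implementation.
import torch
import torchvision.transforms as transforms

# Placeholders: replace with the per-channel statistics computed on SVHN.
svhn_mean = (0.5, 0.5, 0.5)
svhn_std = (0.5, 0.5, 0.5)

# Both source (SVHN) and target (MNIST) images go through the same transform,
# so the F-network always sees inputs normalized by source-domain statistics.
to_feature_input = transforms.Compose([
    transforms.Resize(32),
    transforms.ToTensor(),                      # maps pixels to [0, 1]
    transforms.Normalize(svhn_mean, svhn_std),  # source-domain mean/variance
])

def renormalize_for_discriminator(x, mean=svhn_mean, std=svhn_std):
    """Undo the source-domain normalization and map images to [-1, 1],
    matching the Tanh output range of the generator."""
    mean_t = torch.tensor(mean, device=x.device).view(1, -1, 1, 1)
    std_t = torch.tensor(std, device=x.device).view(1, -1, 1, 1)
    x = x * std_t + mean_t   # back to [0, 1]
    return x * 2.0 - 1.0     # to [-1, 1]
```

Under these assumptions, real images fed to the discriminator are first re-mapped to [-1, 1] so they share the same range as the generator's Tanh outputs, while the F-network keeps its source-statistics normalization.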