pfeatherstone closed this issue 3 years ago.
Apparently that can cause the discriminator to converge too quickly, so you never reach equilibrium. So probably not a good idea. I think I've answered my own question. I'll keep this open in case someone comes up with some other good arguments.
Hi there! I'm currently working on a similar problem. If you concatenate them and there is batch norm in D, it easily fails to behave like a traditional GAN. However, for tasks that don't need batch norm, this has no side effects. For now I've just changed all BN (BatchNorm) layers to GN (GroupNorm).
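For reference, here's a minimal sketch of that BN-to-GN swap in a DCGAN-style discriminator block (the layer shapes and group count are illustrative assumptions, not taken from this repo):

```python
import torch.nn as nn

def disc_block(in_ch, out_ch, num_groups=8):
    # Hypothetical DCGAN-style block; channel sizes and group count
    # are illustrative, not from the repo.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        # GroupNorm normalizes over channel groups within each sample,
        # so its statistics don't mix real and fake images the way
        # BatchNorm's per-batch statistics do.
        nn.GroupNorm(num_groups, out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )
```

Since GroupNorm's statistics are per-sample, the block behaves the same whether real and fake images are batched together or passed through separately.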
@eriklindernoren when training the discriminator, can you concatenate the real images and fake images along the batch dimension (and likewise for the labels), then shuffle along the batch dimension before passing the batch through the discriminator network? Or do you have to run the real images and then the fake images through separately?
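For concreteness, a minimal sketch of the concatenated variant being asked about, assuming a vanilla GAN with a sigmoid-output discriminator `D` (all names here are placeholders, not this repo's API):

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, real_imgs, fake_imgs):
    # Concatenate real and fake images into one batch; detach the fakes
    # so generator gradients aren't computed during the D update.
    imgs = torch.cat([real_imgs, fake_imgs.detach()], dim=0)
    # Labels: 1 for real, 0 for fake, matching a vanilla GAN setup.
    labels = torch.cat([torch.ones(real_imgs.size(0), 1),
                        torch.zeros(fake_imgs.size(0), 1)], dim=0).to(imgs.device)

    # Shuffle along the batch dimension before the forward pass.
    perm = torch.randperm(imgs.size(0), device=imgs.device)
    return F.binary_cross_entropy(D(imgs[perm]), labels[perm])
```

Note that shuffling doesn't actually change BatchNorm's batch statistics (they're permutation-invariant over the batch), so the only real difference from two separate passes is that the concatenation mixes real and fake statistics in each BN layer, which is exactly the failure mode discussed above.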