Closed FriedRonaldo closed 2 years ago
I tested the given pre-trained model (mit-han-lab:DiffAugment-biggan-cifar10-0.1.pth).
This model also shows the augmentation leakage issue. Is it okay to use this ill-behaved model as the best model to report the FID score?
BigGAN seems to expose some unique patterns when it collapses at an early stage of training. This also means there is still a lot of room for improvement.
Thanks for the reply. The best FID score of BigGAN + DiffAug with 10% of the samples might then be 29.xx without the collapse or leakage, so, as you say, there might be much room for improvement.
Hi, thanks for your great work!
I have an issue with augmentation leakage while training BigGAN on CIFAR-10 (10% case) with DiffAug (translation, cutout).
As shown below, the augmentation seems to leak into the generator (especially the cutout operation). I used the same configuration given in the bash file, but with only a single GPU (a V100 with 32 GB of VRAM).
(These are the qualitative results of the best model in terms of FID = 22.53710; as training goes on, the FID gets worse.)
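For context on why cutout can leak: DiffAugment applies the same random transforms to both real and generated images before the discriminator sees them, so the discriminator never observes an un-augmented sample. Below is a rough NumPy sketch of translation- and cutout-style policies (function names, ratios, and shapes are my own illustration, not the repo's actual API):

```python
import numpy as np

def rand_translation(x, ratio=0.125, rng=np.random):
    # Shift each image by up to ratio * size pixels, zero-padding the border.
    n, c, h, w = x.shape
    out = np.zeros_like(x)
    for i in range(n):
        dy = rng.randint(-int(h * ratio), int(h * ratio) + 1)
        dx = rng.randint(-int(w * ratio), int(w * ratio) + 1)
        ys, ye = max(0, dy), min(h, h + dy)
        xs, xe = max(0, dx), min(w, w + dx)
        out[i, :, ys:ye, xs:xe] = x[i, :, ys - dy:ye - dy, xs - dx:xe - dx]
    return out

def rand_cutout(x, ratio=0.5, rng=np.random):
    # Zero out one random square patch per image (side = ratio * image side).
    n, c, h, w = x.shape
    ch, cw = int(h * ratio), int(w * ratio)
    out = x.copy()
    for i in range(n):
        cy = rng.randint(0, h - ch + 1)
        cx = rng.randint(0, w - cw + 1)
        out[i, :, cy:cy + ch, cx:cx + cw] = 0.0
    return out

def diff_augment(x, policies=(rand_translation, rand_cutout)):
    # Both the real batch and the fake batch pass through this before D.
    # If G learns to produce cutout-like holes itself, D cannot penalize
    # them (they look like augmentation) -- that is the leakage above.
    for policy in policies:
        x = policy(x)
    return x
```

(In the real implementation the transforms are differentiable so gradients flow back to G; the loop-based indexing here is only for readability.)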
Is there a way to get the correct result? Or could using only a single GPU cause this problem? The result is the same when I use two GPUs.
Thanks!