mit-han-lab / data-efficient-gans

[NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training
https://arxiv.org/abs/2006.10738
BSD 2-Clause "Simplified" License

Regarding implementation of DiffAugment with BiGAN #73

Open shreejalt opened 3 years ago

shreejalt commented 3 years ago

Hi, thanks a lot for your work. I just wanted to ask: in the paper, DiffAugment is only applied in the discriminator. So if we want to try it with, say, BiGAN, where an encoder is also involved, do I need to apply DiffAugment in the encoder as well when training BiGAN with DiffAugment?

zsyzzsoft commented 3 years ago

For our purpose, i.e., reducing discriminator overfitting, augmenting only the discriminator would be enough. I think it is also totally fine to augment the encoder, but that serves a different purpose: improving its generalization to unseen data, as in classifier training.
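
For reference, augmenting only the discriminator means applying DiffAugment to everything the discriminator sees, both real and generated images. A minimal PyTorch sketch, assuming the `DiffAugment` helper from `DiffAugment_pytorch.py` in this repo (the toy `G`, `D`, and losses are just placeholders, not our training code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from DiffAugment_pytorch import DiffAugment  # from this repo

policy = 'color,translation,cutout'

# Toy 32x32 generator and discriminator, only for illustration.
G = nn.Sequential(nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(3, 1, 4, 2, 1), nn.Flatten(), nn.Linear(16 * 16, 1))

z = torch.randn(8, 64, 16, 16)
reals = torch.rand(8, 3, 32, 32) * 2 - 1

# Discriminator step: augment BOTH real and generated images before D.
d_loss = F.softplus(D(DiffAugment(G(z).detach(), policy=policy))).mean() \
       + F.softplus(-D(DiffAugment(reals, policy=policy))).mean()

# Generator step: gradients flow back to G through the differentiable augmentation.
g_loss = F.softplus(-D(DiffAugment(G(z), policy=policy))).mean()
```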

shreejalt commented 3 years ago

@zsyzzsoft Exactly. For representation learning, augmentation might help in architectures like BiGANs. To my knowledge, the encoder is able to learn features through normal discriminator training using random scaling and cropping, but if we add DiffAugment on the encoder side, I think it may improve results, especially on downstream tasks like object detection, where GANs are not explored much.

Let me try the DiffAugment on BiGAN.

Thanks a lot.

shreejalt commented 3 years ago

Hi @zsyzzsoft, I have one doubt regarding the implementation of DiffAugment with BiGAN, if you can help me out.

When we pass the real image through the encoder and the discriminator, it should be the same image for both, even after applying the augmentation, correct?

Then how can we pass the same image through both the encoder and the discriminator if there are random horizontal flips among the transformations? There might be cases where the augmentation seen by the encoder during the forward pass is not the same as the one seen by the discriminator.

So, if possible can you help me out with this?

zsyzzsoft commented 3 years ago

I think the encoder augmentation and the discriminator augmentation are independent of each other. Maybe you can apply a completely different set of transformations to the encoder and they do not have to be differentiable.
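
For example, a minimal sketch of keeping the two pipelines separate (assuming the PyTorch `DiffAugment` from this repo; the toy encoder, discriminator, and BiGAN pairing below are simplified placeholders):

```python
import torch
import torch.nn as nn
import torchvision.transforms as T
from DiffAugment_pytorch import DiffAugment  # from this repo

policy = 'color,translation,cutout'

# Non-differentiable augmentation for the encoder input only; it acts on
# real images, so no gradient ever needs to pass through it.
# (Here it flips the whole batch at once, just for brevity.)
enc_aug = T.RandomHorizontalFlip()

# Toy 32x32 encoder and joint discriminator, only for illustration.
E = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.Flatten(), nn.Linear(64 * 16 * 16, 128))
D_img = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.Flatten())
D_joint = nn.Linear(64 * 16 * 16 + 128, 1)

reals = torch.rand(8, 3, 32, 32) * 2 - 1

# Encoder path: its own augmentation, independent of the discriminator's.
z_real = E(enc_aug(reals))

# Discriminator path: DiffAugment on the image the discriminator actually sees,
# paired with the encoder's latent code.
d_real = D_joint(torch.cat([D_img(DiffAugment(reals, policy=policy)), z_real], dim=1))
```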

shreejalt commented 3 years ago

Hi, thanks. I implemented DiffAugment in a BiGAN-like architecture, but I observed the augmentations leaking into the generator network: many artifacts appear in the generated as well as the reconstructed images.

I am training on the COCO dataset, just to see the effectiveness on varied data rather than on object-centric datasets like CelebA/ImageNet. I have trained for 88k iterations so far, with a dataset size of around 50k images.

I am applying the DiffAugment transforms with 0.5 probability; cutout, translation, and color are being used.
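
Roughly, the augmentation call I described looks like this (a simplified sketch rather than my exact training code; the 0.5 gate is applied per batch here, since the stock DiffAugment otherwise always applies its policy):

```python
import torch
from DiffAugment_pytorch import DiffAugment  # from this repo

POLICY = 'color,translation,cutout'

def maybe_diffaugment(x, policy=POLICY, p=0.5):
    """Apply the DiffAugment policy to the batch with probability p.

    The stock DiffAugment always applies its policy; this wrapper adds the
    probability gate (decided once per batch in this sketch).
    """
    if torch.rand(()) < p:
        return DiffAugment(x, policy=policy)
    return x
```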

zsyzzsoft commented 3 years ago

Did you apply DiffAugment to both generated and real images? Can you share a piece of the code?