mit-han-lab / data-efficient-gans

[NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training
https://arxiv.org/abs/2006.10738
BSD 2-Clause "Simplified" License

Does this work with Pix2Pix? #13

Closed: Jaberish closed this issue 4 years ago

Jaberish commented 4 years ago

Does this work with Pix2Pix? I was trying to use it for that but the discriminator was way better than the generator.

zsyzzsoft commented 4 years ago

In my experiments with pix2pix and CycleGAN, I found that Color DiffAugment may work, but the performance gains are very limited. Probably the generator's architecture is too weak to overfit the discriminator.
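For context, the "color" policy in this repo's DiffAugment-pytorch composes random brightness, saturation, and contrast jitter. The sketch below is a rough NumPy rendering of those three transforms (the real implementation uses differentiable PyTorch ops so gradients flow back to the generator; the exact ranges here mirror the repo's code but treat them as an approximation):

```python
import numpy as np

def rand_brightness(x, rng):
    # Shift every channel of each sample by a random offset in [-0.5, 0.5).
    return x + (rng.random((x.shape[0], 1, 1, 1)) - 0.5)

def rand_saturation(x, rng):
    # Scale the deviation from the per-pixel channel mean by a factor in [0, 2).
    x_mean = x.mean(axis=1, keepdims=True)
    return (x - x_mean) * (rng.random((x.shape[0], 1, 1, 1)) * 2) + x_mean

def rand_contrast(x, rng):
    # Scale the deviation from the per-sample mean by a factor in [0.5, 1.5).
    x_mean = x.mean(axis=(1, 2, 3), keepdims=True)
    return (x - x_mean) * (rng.random((x.shape[0], 1, 1, 1)) + 0.5) + x_mean

def diffaugment_color(x, rng):
    # Apply the "color" policy: brightness -> saturation -> contrast.
    for f in (rand_brightness, rand_saturation, rand_contrast):
        x = f(x, rng)
    return x

rng = np.random.default_rng(0)
x = rng.random((4, 3, 8, 8))  # NCHW batch in [0, 1)
y = diffaugment_color(x, rng)
```

Because each transform is a smooth function of the input, the same pipeline written in torch is differentiable end to end, which is what lets the augmented fake images still provide gradients to G.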

Kitty-sunray commented 2 years ago

How about pix2pixHD? Spade/GauGAN?

Kitty-sunray commented 2 years ago

@zsyzzsoft, would you please share your pix2pix code so I can try to fit it into pix2pixHD?

zsyzzsoft commented 2 years ago

@Kitty-sunray Haven't tried. Basically you can apply the diffaugment to every image (in pix2pix this is the concatenated image) before feeding it into the discriminator.

Kitty-sunray commented 2 years ago

@zsyzzsoft

> Haven't tried. Basically you can apply the diffaugment to every image (in pix2pix this is the concatenated image) before feeding into the discriminator.

The augmentations should be exactly the same for each image in the concatenated pair, right? Also, there are actually two pairs fed to D: the first is "label -> fake", the second is "label -> real". If the augmentations for "label" and "fake" should be exactly the same, should they also be the same for the "label -> real" pair? I am currently saving the random state so that the augmentations match within each pair, but not across the two pairs. Thanks!
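For what it's worth, the repo's DiffAugment samples fresh randomness on every call, so the real and fake batches normally get independent augmentations. If one did want the two pairs to share parameters, one approach (a NumPy sketch under that assumption, with a single brightness shift standing in for the full policy) is to sample the augmentation parameters once per discriminator step and pass them in, rather than saving and restoring RNG state:

```python
import numpy as np

def brightness_with_shift(x, shift):
    # Apply a *given* brightness shift instead of sampling inside the
    # function, so the same parameters can be reused across pairs.
    return x + shift

rng = np.random.default_rng(0)
label = rng.random((2, 3, 8, 8))  # conditioning input
fake = rng.random((2, 3, 8, 8))   # generator output
real = rng.random((2, 3, 8, 8))   # ground-truth target

# Sample the augmentation parameters once per D step...
shift = rng.random((2, 1, 1, 1)) - 0.5
# ...then apply them to both the (label, fake) and (label, real) pairs.
fake_pair_aug = brightness_with_shift(np.concatenate([label, fake], axis=1), shift)
real_pair_aug = brightness_with_shift(np.concatenate([label, real], axis=1), shift)
```

Factoring the sampling out of the transform makes the sharing explicit and avoids relying on RNG state being restored in exactly the right place.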