mit-han-lab / data-efficient-gans

[NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training
https://arxiv.org/abs/2006.10738
BSD 2-Clause "Simplified" License

Disable path regularization and lazy regularization #57

Open nupurkmr9 opened 3 years ago

nupurkmr9 commented 3 years ago

Hi! In the paper, for FFHQ 1k 256×256 training with DiffAugment, it is written that path length regularization and lazy regularization are disabled. If I am not wrong, in the DiffAugment-stylegan2-pytorch repo, lazy regularization and path length regularization are still enabled, right? Just wanted to confirm this before I start any training. :) Thanks!

zsyzzsoft commented 3 years ago

Yes. To fully reproduce our results from the TensorFlow version, you may need to change them, and possibly some other hyperparameters, as described in the paper.
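
For reference, a minimal sketch of where these two switches typically live, assuming the repo follows the stylegan2-ada-pytorch layout it is built on (the argument names below are assumptions, not necessarily the exact ones in this codebase):

```python
import dnnlib  # utility library bundled with the StyleGAN2 codebases

# Hypothetical training-config edits, assuming stylegan2-ada-pytorch-style
# argument names (loss_kwargs.pl_weight, G_reg_interval, D_reg_interval).
args = dnnlib.EasyDict()
args.loss_kwargs = dnnlib.EasyDict(class_name='training.loss.StyleGAN2Loss', r1_gamma=1)

# Disable path length regularization by zeroing its loss weight.
args.loss_kwargs.pl_weight = 0

# Disable lazy regularization: with a None interval, the training loop
# applies each regularizer together with the main loss at every step
# instead of on a separate, less frequent schedule.
args.G_reg_interval = None
args.D_reg_interval = None
```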

nupurkmr9 commented 3 years ago

Thanks. Can you tell me which other hyperparameters need to be changed for FFHQ with the "paper256" config?

zsyzzsoft commented 3 years ago

mb=32, mbstd=4, lrate=0.002, and enable mirror augmentation.
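
For concreteness, a sketch of how these values might be applied, assuming the "paper256" preset lives in a `cfg_specs` dictionary in train.py as in stylegan2-ada-pytorch (the field names and upstream defaults below come from that codebase and may differ here):

```python
# Hypothetical override of the 'paper256' preset, assuming the
# stylegan2-ada-pytorch cfg_specs layout in train.py; untouched
# fields keep their upstream defaults.
cfg_specs = {
    'paper256': dict(
        ref_gpus=8, kimg=25000,
        mb=32,        # total minibatch size (upstream default: 64)
        mbstd=4,      # minibatch-stddev group size (upstream default: 8)
        fmaps=0.5,
        lrate=0.002,  # learning rate (upstream default: 0.0025)
        gamma=1, ema=20, ramp=None, map=8,
    ),
}
```

Mirror augmentation would then be enabled at launch time, e.g. `python train.py --outdir=... --data=... --gpus=8 --cfg=paper256 --mirror=1`.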