nupurkmr9 opened 3 years ago
Yes. You may need to change them, along with some other hyperparameters mentioned in the paper, to fully reproduce the results of our TensorFlow version.
Thanks. Could you tell me which other hyperparameters need to be changed for FFHQ with the "paper256" config?
mb=32, mbstd=4, lrate=0.002, and enable mirror augmentation.
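For reference, a minimal sketch of collecting these overrides in one place. The key names below (`minibatch_size`, `mbstd_group_size`, etc.) are illustrative assumptions, not the repo's actual command-line flags; check the repo's `train.py` for the real option names.

```python
# Hypothetical config helper for the "paper256" FFHQ overrides listed above.
# Key names are illustrative, not the actual DiffAugment-stylegan2-pytorch flags.

def paper256_ffhq_overrides():
    """Return the hyperparameter overrides from the reply above as a dict."""
    return {
        "minibatch_size": 32,    # mb=32: total batch size across all GPUs
        "mbstd_group_size": 4,   # mbstd=4: minibatch-stddev group size in D
        "learning_rate": 0.002,  # lrate=0.002 for both G and D
        "mirror_augment": True,  # enable horizontal-flip (mirror) augmentation
    }

if __name__ == "__main__":
    print(paper256_ffhq_overrides())
```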
Hi, in the paper it says that path length regularization and lazy regularization are disabled for FFHQ-1k 256x256 training with DiffAugment. If I'm not wrong, the DiffAugment-stylegan2-pytorch repo still has lazy regularization and path length regularization enabled, right? Just wanted to confirm this before I start any training. :) Thanks!
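To illustrate what "disabling" these would look like, here is a hedged sketch in the style of upstream StyleGAN2-like codebases. The names `pl_weight`, `G_reg_interval`, and `D_reg_interval` follow upstream StyleGAN2 conventions and are assumptions about this repo, not confirmed flags.

```python
# Illustrative sketch only: how disabling path-length and lazy regularization
# is commonly expressed in StyleGAN2-style codebases. The key names follow
# upstream StyleGAN2 conventions and are assumptions, not this repo's flags.

def no_regularization_overrides():
    """Overrides that would switch off the two regularizers in question."""
    return {
        "pl_weight": 0.0,     # weight 0 turns off path-length regularization
        "G_reg_interval": 1,  # interval 1 applies reg terms every step,
                              # i.e. no lazy-regularization schedule for G
        "D_reg_interval": 1,  # likewise for the discriminator's reg term
    }
```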