NVlabs / stylegan2-ada-pytorch

StyleGAN2-ADA - Official PyTorch implementation
https://arxiv.org/abs/2006.06676

Rule of thumb for kimg? #274


lebeli commented 1 year ago

I have a dataset of ~4000 images (CAD images without any noise). Is there a rule of thumb for choosing the kimg hyperparameter? Also, is kimg basically how one controls the number of epochs?

thinkercache commented 1 year ago

The general understanding is: 1 kimg = 1000 images, i.e. 1000 real images are shown to the network during training. In my experience, --kimg=4000 in the training configuration is a good starting point for observing how G and D behave. After that you can go lower or higher, depending on your dataset. The codebase measures training progress in kimg rather than epochs, which is what its built-in logging and scheduling machinery in PyTorch is keyed to.
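To make the epoch comparison concrete, here is a quick back-of-the-envelope calculation (the numbers are just the ones from this thread, not anything prescribed by the codebase):

```python
# Rough conversion between kimg and "epochs" for a fixed-size dataset.
# kimg counts real images shown to the discriminator, in thousands.
dataset_size = 4000   # ~4000 CAD images, as in the question above
kimg = 4000           # suggested --kimg starting point

images_shown = kimg * 1000                  # 1 kimg = 1000 images
approx_epochs = images_shown / dataset_size
print(f"--kimg={kimg} on {dataset_size} images ~= {approx_epochs:.0f} passes over the data")
# --kimg=4000 on 4000 images ~= 1000 passes over the data
```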

lebeli commented 1 year ago

Thank you for the explanation. What is a good metric for observing both the generator and the discriminator? FID for the generator and logits for the discriminator, or simply logits for both?

thinkercache commented 1 year ago

@lebeli KID is a good choice for small datasets, as the original KID paper suggests ("Demystifying MMD GANs", Mikołaj Bińkowski, Danica J. Sutherland, Michael Arbel, Arthur Gretton, https://arxiv.org/abs/1801.01401). FID is more widely used and, as I currently understand it, a better fit for large datasets (roughly 10k+ images). One such metric covers both G and D.
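If it helps, the repo exposes these metrics by name, both during training and as a post-hoc evaluation. Something along these lines should work; the paths are placeholders and the exact flag names are from memory, so double-check them against the README:

```bash
# Track KID (and/or FID) automatically during training:
python train.py --outdir=~/training-runs --data=~/datasets/cad.zip --gpus=1 \
    --kimg=4000 --metrics=kid50k_full

# Or evaluate an existing snapshot after the fact:
python calc_metrics.py --metrics=kid50k_full,fid50k_full \
    --network=~/training-runs/<run>/network-snapshot-<kimg>.pkl \
    --data=~/datasets/cad.zip
```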