jychoi118 / P2-weighting

CVPR 2022
MIT License

About training on FFHQ #4

Closed · Suimingzhe closed this issue 1 year ago

Suimingzhe commented 1 year ago

Thanks for your work.

I have two questions about training P2-weighting on the FFHQ dataset.

  1. Is it correct to set p2_gamma = 1 and p2_k = 1 when training on a face dataset (FFHQ)?
  2. I notice you chose a lightweight U-Net based on ADM. Do you think a larger U-Net would further improve the quality of the generated face images? If so, which hyperparameters should I modify?

Thanks again and look forward to your reply.

Kind Regards

jychoi118 commented 1 year ago
  1. We used p2_gamma = 0.5 and p2_k = 1 for FFHQ. However, it is okay to use p2_gamma = 1.
  2. Self-attention seems effective on datasets such as FFHQ, where global consistency is important. You may try self-attention at multiple resolutions: --attention_resolutions 8,16,32
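For reference, p2_gamma and p2_k set gamma and k in the paper's loss weight, lambda'_t = lambda_t / (k + SNR(t))^gamma, with SNR(t) = alpha_bar_t / (1 - alpha_bar_t). Below is a minimal sketch of that weight; the function and variable names are illustrative, not the repo's actual code:

```python
import numpy as np

# Minimal sketch of the P2 weight from the paper (illustrative names, not the
# repo's actual implementation): weight(t) = 1 / (k + SNR(t))^gamma, where
# SNR(t) = alpha_bar_t / (1 - alpha_bar_t) is the signal-to-noise ratio.

def p2_weight(alphas_cumprod, p2_gamma=0.5, p2_k=1.0):
    """Per-timestep multiplier applied to the simple diffusion MSE loss."""
    snr = alphas_cumprod / (1.0 - alphas_cumprod)
    return 1.0 / (p2_k + snr) ** p2_gamma

# Example: standard linear beta schedule with 1000 diffusion steps.
betas = np.linspace(1e-4, 0.02, 1000)
alphas_cumprod = np.cumprod(1.0 - betas)

w = p2_weight(alphas_cumprod, p2_gamma=0.5, p2_k=1.0)  # FFHQ setting above
# gamma = 0 recovers the unweighted simple loss; larger gamma down-weights
# the low-noise (high-SNR) steps, which mostly refine imperceptible details.
print(w[:3], w[-3:])
```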
Suimingzhe commented 1 year ago

> 1. We used p2_gamma = 0.5 and p2_k = 1 for FFHQ. However, it is okay to use p2_gamma = 1.
> 2. Self-attention seems effective on datasets such as FFHQ, where global consistency is important. You may try self-attention at multiple resolutions: --attention_resolutions 8,16,32

Thanks for your advice. I will try it as you suggested.

Kind Regards