- We used p2_gamma = 0.5 and p2_k = 1 for FFHQ. However, it is okay to use p2_gamma = 1.
- Self-attention seems to be effective on datasets such as FFHQ, where global consistency is important. You may try self-attention at multiple resolutions (see the sample command after this list):
--attention_resolutions 8,16,32
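For reference, here is a minimal sketch of how these flags might be combined into a training command. It assumes the guided-diffusion-style scripts/image_train.py entry point and the --p2_gamma / --p2_k flags; the data path and the remaining model/optimization flags are illustrative placeholders, not the exact FFHQ configuration.

```bash
# Sketch only: assumes the P2-weighting fork of guided-diffusion and its
# scripts/image_train.py entry point. The data path and the model/optimizer
# flags below are illustrative placeholders, not the authors' exact FFHQ setup.
python scripts/image_train.py \
    --data_dir /path/to/ffhq \
    --image_size 256 \
    --num_channels 128 \
    --num_res_blocks 2 \
    --attention_resolutions 8,16,32 \
    --diffusion_steps 1000 \
    --noise_schedule linear \
    --lr 1e-4 \
    --batch_size 8 \
    --p2_gamma 0.5 \
    --p2_k 1
```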
Thanks for your advice. I will try it as you suggested.
Kind Regards
Thanks for your work.
I have two questions about training p2-weighting on the FFHQ dataset.
Thanks again, and I look forward to your reply.
Kind Regards