genforce / sefa

[CVPR 2021] Closed-Form Factorization of Latent Semantics in GANs
https://genforce.github.io/sefa/
MIT License

Question about factorize_weight() #7

Closed junikkoma closed 3 years ago

junikkoma commented 3 years ago

Hi, I'd first like to thank you for your great research and the generous release of the code. I'm using your approach in research based on StyleGAN. While looking at your code, I noticed that when concatenating weights to get A from the paper, the last weight comes from an output layer (like synthesis.output7.style.weight), unlike the others, which come from style weight layers (like synthesis.layer14.style.weight).

I was wondering why the last output layer is included in constructing A, since some other unofficial implementations that do not include the output layer seem to work as well. If there is a specific reason the output layer was included, I'd like to know.
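
For reference, here is roughly how I am building A myself (my own sketch, not your repository code; I simply scan the synthesis network for sub-modules named like layer*.style and output*.style, the names I quoted above, and stack their weights):

```python
import numpy as np

def build_A(generator, include_output_layers=True):
    """Concatenate the per-layer style weights into the matrix A."""
    weights = []
    for name, module in generator.synthesis.named_modules():
        # Style sub-modules show up as e.g. layer14.style and output7.style.
        if not name.endswith('style'):
            continue
        if not include_output_layers and 'output' in name:
            continue
        w = module.weight.detach().cpu().numpy()  # (out_channels, latent_dim)
        weights.append(w)
    # Stack along the channel axis: A has shape (total_channels, latent_dim).
    return np.concatenate(weights, axis=0)
```

Setting include_output_layers=False corresponds to the variant the unofficial implementations seem to use.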

(I understand this curiosity may come from my lack of knowledge of GAN structure. In that case, I would be grateful if you simply pointed that out.)

Thank you in advance for your kind response!

ShenYujun commented 3 years ago

First, there is no synthesis.layer14.style.weight in our StyleGAN model; the last convolutional layer should be layer13.

Second, output7 is used for StyleGAN2. Please look into this file and this file to check the difference between the StyleGAN and StyleGAN2 implementations.

Third, the last layer does not affect the factorization result that much, which is why not including the output layer gives similar results.
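
To make the third point concrete, here is a small synthetic sketch (made-up layer widths and a made-up shared-direction structure, not the repository code). When the layers share a few dominant directions, dropping the last, relatively small block of rows from A barely moves the top eigenvectors of A^T A, which are the semantic directions the closed-form factorization returns:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_semantics = 512, 10
channels = [512] * 13 + [32]   # made-up widths; the small last block stands in for the output layer

# Every layer shares a few dominant directions, plus small per-layer noise.
basis = rng.standard_normal((latent_dim, n_semantics))
blocks = [rng.standard_normal((c, n_semantics)) @ basis.T
          + 0.1 * rng.standard_normal((c, latent_dim)) for c in channels]

A_with = np.concatenate(blocks, axis=0)        # A including the output-like block
A_without = np.concatenate(blocks[:-1], axis=0)

def top_directions(A, k=5):
    # Semantic directions = top eigenvectors of A^T A.
    values, vectors = np.linalg.eigh(A.T @ A)
    return vectors[:, np.argsort(values)[::-1][:k]]

V_with, V_without = top_directions(A_with), top_directions(A_without)

# Cosines of the principal angles between the two top-5 subspaces;
# values close to 1.0 mean the two versions find nearly the same directions.
print(np.linalg.svd(V_with.T @ V_without, compute_uv=False))
```

The output layer only contributes one more small block of rows to A, so it only slightly perturbs A^T A and hence the eigenvectors.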