genforce / sefa

[CVPR 2021] Closed-Form Factorization of Latent Semantics in GANs
https://genforce.github.io/sefa/
MIT License

A question about the assumption of the method #11

Closed di-mi-ta closed 3 years ago

di-mi-ta commented 3 years ago

Hi,

I have a question: can this method be applied to any layer of the generator? Have you run any experiments to explore this? In the paper, you only show a theoretical proof for the mapping from the latent code to the first linear layer of the generator, while ignoring the following non-linear and linear layers.

Thanks!

ShenYujun commented 3 years ago

For the style-based generator (which feeds the latent code to all convolutional layers), SeFa supports analyzing the code for each layer independently. But, indeed, we only focus on the first mapping layer (fully-connected). Studying the convolutional layers and taking the non-linear activations into account is worth exploring in the future. Thanks for the suggestion.
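
For reference, a minimal sketch of the per-layer factorization discussed above: the semantic directions for a linear transformation with weight A are the eigenvectors of A^T A, ordered by eigenvalue. The function name `factorize_weight`, the column normalization, and the random stand-in weight below are illustrative assumptions for this sketch, not the repo's actual API.

```python
import numpy as np

def factorize_weight(weight):
    """Closed-form factorization of one layer's weight (output_dim x latent_dim).

    Semantic directions are the eigenvectors of A^T A, sorted so the most
    significant direction comes first. The column normalization is an
    assumption of this sketch.
    """
    A = weight / np.linalg.norm(weight, axis=0, keepdims=True)
    eigen_values, eigen_vectors = np.linalg.eigh(A.T @ A)
    order = np.argsort(-eigen_values)            # descending eigenvalues
    return eigen_vectors[:, order].T, eigen_values[order]

# Hypothetical usage: edit a latent code along the top discovered direction.
latent_dim = 512
weight = np.random.randn(1024, latent_dim)       # stand-in for a layer's weight
directions, _ = factorize_weight(weight)
z = np.random.randn(1, latent_dim)
z_edited = z + 3.0 * directions[0]               # shift along the first direction
```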