Closed by di-mi-ta 3 years ago
For the style-based generator (which feeds the latent code to all convolutional layers), SeFa supports analyzing the code for each layer independently. However, we currently focus only on the first mapping layer (fully-connected). Exploring the function of the convolutional layers, as well as accounting for the non-linear activations, is worth pursuing in future work. Thanks for the suggestion.
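For reference, the per-layer analysis mentioned above can be sketched in closed form: SeFa finds the unit directions that maximally perturb a layer's output, which are the top eigenvectors of A^T A for that layer's weight A. The snippet below is a minimal illustration (the function name and the random stand-in weight are hypothetical, not the repo's API):

```python
import numpy as np

def sefa_directions(weight, k=5):
    """Closed-form factorization of one (fully-connected) layer.

    The unit latent directions n that maximize ||A n||^2 are the
    eigenvectors of A^T A with the largest eigenvalues; returning the
    top-k of them gives the candidate semantic directions.
    """
    # weight has shape (out_dim, latent_dim)
    ata = weight.T @ weight
    eigvals, eigvecs = np.linalg.eigh(ata)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]       # pick the k largest
    return eigvecs[:, order].T                  # each row: a unit latent direction

# Illustrative stand-in for a first fully-connected layer's weight matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((512, 128))
dirs = sefa_directions(A, k=3)
```

Applying the same decomposition to a convolutional layer's (flattened) weight would give per-layer directions, but as noted above this ignores the non-linear activations between layers.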
Hi,
I have a question: can this method be applied to any layer of the generator? Have you run any experiments to explore this? In the paper, you only give a theoretical proof for the mapping from the latent code to the first linear layer of the generator, ignoring the subsequent non-linear and linear layers.
Thanks!