May I ask a few questions about your own StyleGAN generator (256×256)?
I noticed that the last layer of your generator's mapping MLP differs from the original StyleGAN: instead of repeating the w code of shape (1, 512) 14 times, you directly learn the wp code with shape (14, 512). Does this improve the model? It seems to make image manipulation work better.
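To make sure I understand the difference, here is a minimal NumPy sketch of the two layouts I am comparing (the variable names and the random codes are just placeholders, not your actual implementation):

```python
import numpy as np

NUM_LAYERS = 14  # a 256x256 StyleGAN generator has 14 style inputs
LATENT_DIM = 512

# Original StyleGAN: the mapping network outputs one w code of shape
# (1, 512), which is then repeated for every layer, so all rows are equal.
w = np.random.randn(1, LATENT_DIM)
wp_repeated = np.tile(w, (NUM_LAYERS, 1))  # shape (14, 512), identical rows

# The variant I am asking about: the mapping network learns the per-layer
# wp code directly, so each of the 14 rows can differ.
wp_learned = np.random.randn(NUM_LAYERS, LATENT_DIM)  # shape (14, 512)

print(wp_repeated.shape, wp_learned.shape)
```

Is this an accurate picture of the change?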
When you train the boundary for the wp code with shape (14, 512), do you apply np.linalg.norm() to each (1, 512) row separately, or to the whole (14, 512) array at once? It looks like the former.
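Concretely, these are the two normalization schemes I am asking about (a sketch with a random code, not your training code):

```python
import numpy as np

wp = np.random.randn(14, 512)

# Option 1 ("the former"): normalize each (1, 512) row by its own norm,
# so every row ends up with unit length.
per_row = wp / np.linalg.norm(wp, axis=1, keepdims=True)

# Option 2: normalize the whole (14, 512) array by a single scalar norm,
# so only the array as a whole has unit length.
whole = wp / np.linalg.norm(wp)

print(np.linalg.norm(per_row, axis=1))  # 14 values, all equal to 1
print(np.linalg.norm(whole))            # a single value equal to 1
```

Which one matches what you do when training the boundary?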
How do you choose the value of manipulate_layers?
Thank you again for GenForce's kind sharing! It's really cool!