genforce / interfacegan

[CVPR 2020] Interpreting the Latent Space of GANs for Semantic Face Editing
https://genforce.github.io/interfacegan/
MIT License

How to train boundary in wp space? #74

Open WangQinghuCS opened 3 years ago

WangQinghuCS commented 3 years ago

Hi, thanks for your work! I recently read another of your papers, "In-Domain GAN Inversion for Real Image Editing". In that paper, you perform the image manipulation in the wp space. I wonder how to train the boundary in the wp space?

ShenYujun commented 3 years ago

Assuming you are using a model of 1024 resolution (i.e., with 18 convolutional layers), there will be 18 w codes in total. The boundary is then trained on the concatenation of all of them (i.e., an 18x512-dimensional feature).
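A minimal numpy sketch of this setup, with synthetic data standing in for real wp codes and attribute scores (InterFaceGAN itself trains a linear SVM; the mean-difference direction below is a simple stand-in for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: N latent codes in wp space for a 1024-res model
# (18 layers x 512 dims), each with a binary attribute label.
N, L, D = 200, 18, 512
wp = rng.normal(size=(N, L, D))        # wp codes
labels = rng.integers(0, 2, size=N)    # e.g. attribute present / absent

# Concatenate all 18 w vectors into one 18*512 = 9216-dim feature.
X = wp.reshape(N, L * D)

# Stand-in for the linear classifier: difference of class means,
# normalized to a unit boundary normal.
boundary = X[labels == 1].mean(0) - X[labels == 0].mean(0)
boundary /= np.linalg.norm(boundary)   # unit vector in R^9216

# Editing: move a code along the boundary, then reshape back to wp.
alpha = 3.0
edited = (X[0] + alpha * boundary).reshape(L, D)
```

The key point is only the reshape: the boundary lives in the flattened 9216-dimensional space, so each of the 18 layers gets its own 512-dim slice of the normal.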

WangQinghuCS commented 3 years ago

> Assuming you are using a model of 1024 resolution (i.e., with 18 convolutional layers), there will be 18 ws in total. Then, the boundary is trained by concatenating all ws together (i.e., 18x512-dimensional).

I want to train boundaries for each w in wp, as described in your work HiGAN ("Semantic Hierarchy Emerges in Deep Generative Representations for Scene Synthesis"). Would that also work?

ShenYujun commented 3 years ago

During training, the w code is repeated before being fed to all layers. Hence, training a boundary for each w in wp is exactly the same as training a single boundary on w. In HiGAN, accordingly, all layers share the same boundary from the W space.
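The equivalence above can be checked numerically: if wp is just w tiled across the 18 layers, then shifting every layer along the same boundary is identical to shifting w once and re-tiling (a small sketch with synthetic codes; the names are illustrative, not from the repo):

```python
import numpy as np

rng = np.random.default_rng(1)
D, L = 512, 18

# Hypothetical W-space code and a unit boundary trained on W.
w = rng.normal(size=D)
b = rng.normal(size=D)
b /= np.linalg.norm(b)

# The generator repeats w across all 18 layers to form the wp code.
wp = np.tile(w, (L, 1))        # shape (18, 512)

# Editing every layer with the same per-layer boundary...
wp_edit = wp + 2.0 * b         # broadcasts b over all layers

# ...equals editing w once in W space and re-tiling.
w_edit = w + 2.0 * b
assert np.allclose(wp_edit, np.tile(w_edit, (L, 1)))
```

So per-layer boundaries only become meaningfully different from a single W-space boundary when the wp codes themselves differ across layers, e.g. after GAN inversion or style mixing.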