Open WangQinghuCS opened 3 years ago
Assuming you are using a model of 1024 resolution (i.e., with 18 convolutional layers), there will be 18 `w`s in total. Then, the boundary is trained by concatenating all `w`s together (i.e., 18x512-dimensional).
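A minimal sketch of this procedure, under assumed inputs: the `wp` codes and binary attribute labels below are random placeholders (in practice, labels would come from an attribute classifier applied to the synthesized images), and plain logistic regression stands in for the linear SVM used in InterFaceGAN — any linear classifier gives a separating hyperplane whose normal vector serves as the boundary.

```python
import numpy as np

# Hypothetical setup: N samples of wp codes (18 layers of 512-dim w each)
# with binary attribute labels. Real data would come from sampled latents
# and an off-the-shelf attribute classifier.
rng = np.random.default_rng(0)
num_samples, num_layers, w_dim = 200, 18, 512
wp = rng.standard_normal((num_samples, num_layers, w_dim))
labels = rng.integers(0, 2, size=num_samples).astype(np.float64)

# Concatenate the 18 w codes into one 18*512-dim feature per sample.
X = wp.reshape(num_samples, num_layers * w_dim)

# Fit a linear boundary with logistic regression via gradient descent
# (a stand-in for the linear SVM used in the original work).
w = np.zeros(num_layers * w_dim)
b = 0.0
lr = 0.1
for _ in range(100):
    logits = X @ w + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = probs - labels
    w -= lr * (X.T @ grad) / num_samples
    b -= lr * grad.mean()

# Normalize the hyperplane normal: this is the editing direction in wp-space,
# reshaped back to one 512-dim direction per layer.
boundary = (w / np.linalg.norm(w)).reshape(num_layers, w_dim)
print(boundary.shape)  # (18, 512)
```

Editing a code then amounts to `wp + alpha * boundary` for a chosen step size.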
I want to train boundaries for each `w` in `wp`, as described in your work "HiGAN - Semantic Hierarchy Emerges in Deep Generative Representations for Scene Synthesis". Will that also work?
During training, the `w` code is repeated before being fed to all layers. Hence, training boundaries for each `w` in `wp` is exactly the same as training only one boundary on `w`. In HiGAN, accordingly, all layers share the same boundary from the `W`-space.
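This equivalence can be checked numerically. The sketch below uses random placeholder vectors: when the same `w` is tiled across all 18 layers, scoring it against 18 per-layer boundaries collapses to scoring `w` against a single boundary (the sum of the per-layer normals), so no extra information is gained by training per-layer.

```python
import numpy as np

rng = np.random.default_rng(1)
num_layers, w_dim = 18, 512

# One w code, repeated across all 18 layers (as done during training).
w = rng.standard_normal(w_dim)
wp = np.tile(w, (num_layers, 1))

# Eighteen hypothetical per-layer boundaries (one per w in wp).
per_layer = rng.standard_normal((num_layers, w_dim))

# Score of the repeated code against the per-layer boundaries equals the
# score of the single w against the summed boundary.
score_per_layer = np.sum(wp * per_layer)
single_boundary = per_layer.sum(axis=0)
score_single = w @ single_boundary
assert np.allclose(score_per_layer, score_single)
```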
Hi, thanks for your work! I recently read another paper of yours, "In-Domain GAN Inversion for Real Image Editing". In that paper, you conduct image manipulation in the `wp` space. I wonder how to train the boundary in the `wp` space?