genforce / interfacegan

[CVPR 2020] Interpreting the Latent Space of GANs for Semantic Face Editing
https://genforce.github.io/interfacegan/
MIT License

How to train w+ space boundary? #38

Closed taotaoyuhust closed 4 years ago

taotaoyuhust commented 4 years ago

Using the stylegan-encoder project, I got the latent codes as an array of shape (n, 18, 512). However, the training code expects 1-D vector inputs. Do I need to split the latent code into 1-D vectors?
Thanks a lot!

ShenYujun commented 4 years ago

You have two options: (1) flatten the codes from shape (n, 18, 512) to shape (n, 18*512), use the reshaped codes for boundary training, then reshape the resulting boundary back. This way you get a single boundary. (2) Train 18 boundaries, one per layer, separately. For this option, please refer to HiGAN for more details.
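Option (1) can be sketched as follows. This is a minimal illustration, assuming randomly generated codes and labels as stand-ins for real W+ latents and attribute annotations, and a linear SVM as the boundary classifier (InterFaceGAN fits linear boundaries; the exact training pipeline may differ):

```python
import numpy as np
from sklearn import svm

# Hypothetical data: n latent codes in W+ space with binary attribute labels.
rng = np.random.default_rng(0)
n = 200
latent_codes = rng.standard_normal((n, 18, 512))  # (n, 18, 512)
labels = rng.integers(0, 2, size=n)               # 0/1 attribute labels

# Option (1): flatten each code into a single 18*512-dim vector.
flat_codes = latent_codes.reshape(n, 18 * 512)    # (n, 9216)

# Train a linear SVM; its normal vector serves as the semantic boundary.
clf = svm.SVC(kernel='linear')
clf.fit(flat_codes, labels)

# Reshape the boundary back to the W+ layout and unit-normalize it.
boundary = clf.coef_.reshape(1, 18, 512)
boundary = boundary / np.linalg.norm(boundary)

print(boundary.shape)  # (1, 18, 512)
```

The recovered boundary has the same (18, 512) layout as a W+ code, so it can be added to a latent code directly for editing.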

taotaoyuhust commented 4 years ago

Thanks for the reply. Actually, I tried the second method and trained 8 boundaries for the first 8 layers, but the performance was not that good. I'll refer to your HiGAN method, thanks very much!

WJ-Lai commented 4 years ago

Hi, what sample size should I use for training a W+ space boundary?

> You have two options: (1) flatten the codes from the shape (n, 18, 512) to the shape (n, 18*512), then use the reshaped code for boundary training and then reshape it back. In this way, you will get only one boundary. (2) Train 18 boundaries for different layers separately. For this option, please refer to HiGAN for more details.

In the paper, you use 20K samples for StyleGAN. If I use method one, (1, 18*512) is 18 times the size of (1, 512); should I use 360K samples for training?

If I use method two, can I still use 20K for training?

ShenYujun commented 4 years ago

@WJ-Lai (1) 20K is enough. (2) You can use the same 20K samples for all 18 layers.