genforce / idinvert

[ECCV 2020] In-Domain GAN Inversion for Real Image Editing
https://genforce.github.io/idinvert/
MIT License

more training examples for manipulation #5

Closed 250906461 closed 4 years ago

250906461 commented 4 years ago

Dear Genforce group,

Thank you for your great work; I really like 'In-Domain GAN Inversion'.

As you state in the paper, in order to do image manipulation (male -> female), you use InterFaceGAN to find the manipulation direction in wp, w, or z. Since wp has 14× the dimensionality of w, you would need to generate more data to train the linear SVM, right? InterFaceGAN manipulates in w and trains its SVM on 500k generated images; how many images do you need to train an SVM on wp?

Notice that InterFaceGAN uses the standard StyleGAN, where wp is just a broadcast copy of w; the two have the same intrinsic dimension, so the same number of images suffices to train on either. But in 'in-domain GAN', you retrain StyleGAN with a different w per layer instead of a repeated w, so wp has a larger intrinsic dimension than any single w.

Looking forward to your reply. Thanks in advance.

ShenYujun commented 4 years ago

Actually, we can use the same number of samples for boundary training. Please refer to this issue. You can still treat the w codes for 14 layers independently. In this way, it is exactly the same as InterFaceGAN. Please see HiGAN for the details of layer-wise manipulation.
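To make the idea concrete, here is a minimal, self-contained sketch of InterFaceGAN-style boundary training followed by layer-wise editing. It uses a tiny hinge-loss linear SVM written in NumPy rather than the actual InterFaceGAN code, and the latent codes, labels, strength `alpha`, and layer indices are all synthetic/hypothetical — the point is only that one 512-d boundary trained on w codes can be applied to each of the 14 layer codes of a w+ code independently.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.1, epochs=100, lam=1e-3):
    """Full-batch subgradient descent on the hinge loss; y in {-1, +1}.
    Returns the unit normal of the separating hyperplane."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        violating = y * (X @ w) < 1.0                 # margin violations
        grad = lam * w - (y[violating, None] * X[violating]).sum(0) / len(X)
        w -= lr * grad
    return w / np.linalg.norm(w)

rng = np.random.RandomState(0)

# Synthetic stand-in for attribute-labeled w codes (e.g. male vs. female):
# a hidden ground-truth direction defines the labels.
true_dir = rng.randn(512)
true_dir /= np.linalg.norm(true_dir)
X = rng.randn(5000, 512)                              # fake 512-d w codes
y = np.sign(X @ true_dir)                             # fake attribute labels

boundary = train_linear_svm(X, y)                     # manipulation direction

# Layer-wise manipulation (cf. HiGAN): a w+ code is 14 per-layer 512-d
# codes; move only the selected layers along the boundary normal.
wp = np.tile(X[0], (14, 1))                           # a (14, 512) w+ code
alpha, layers = 3.0, [4, 5, 6, 7]                     # hypothetical choices
wp_edit = wp.copy()
wp_edit[layers] += alpha * boundary
```

Because each of the 14 layer codes lives in the same 512-d w space, the boundary is trained once on w samples (same sample budget as InterFaceGAN) and then reused per layer.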

250906461 commented 4 years ago

Thanks, your answer is very helpful.