genforce / idinvert

[ECCV 2020] In-Domain GAN Inversion for Real Image Editing
https://genforce.github.io/idinvert/
MIT License
461 stars · 65 forks

Encoder trained on crawled faces or FFHQ #42

Closed — PXThanhLam closed this 3 years ago

PXThanhLam commented 3 years ago

Thanks for your great work. I have run your repo and noticed that the inversion results on your example images, and on images generated by the pretrained StyleGAN generator, are good, but when running on random images from the internet the results are really bad. Was your encoder trained on FFHQ or on random images from the internet? Also, are the example images in your repo random, or taken from the encoder's training set? Thank you.

zhujiapeng commented 3 years ago

Both the generator and the encoder were trained on the FFHQ dataset, which is already aligned. So if you want to invert an image from the web, make sure you align it first.
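For reference, FFHQ images are aligned by cropping an oriented quad derived from facial landmarks. Below is a minimal sketch of that geometric step (computing the crop quad from four assumed landmark positions: eye and mouth corners); it is a hypothetical helper, not code from this repo, and in practice the landmarks would come from a face detector such as dlib.

```python
import numpy as np

def ffhq_align_quad(eye_left, eye_right, mouth_left, mouth_right):
    """Compute the oriented crop quad of the FFHQ-style alignment recipe.

    Landmarks are (x, y) pixel coordinates with y increasing downwards.
    Returns a (4, 2) array of quad corners and the quad size in pixels.
    """
    eye_left = np.asarray(eye_left, dtype=np.float64)
    eye_right = np.asarray(eye_right, dtype=np.float64)
    mouth_left = np.asarray(mouth_left, dtype=np.float64)
    mouth_right = np.asarray(mouth_right, dtype=np.float64)

    eye_avg = (eye_left + eye_right) * 0.5
    eye_to_eye = eye_right - eye_left
    mouth_avg = (mouth_left + mouth_right) * 0.5
    eye_to_mouth = mouth_avg - eye_avg

    # Oriented crop axes: x follows the eye line, corrected by the
    # eye-to-mouth direction; y is x rotated by 90 degrees.
    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
    x /= np.hypot(*x)
    x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
    y = np.flipud(x) * [-1, 1]
    c = eye_avg + eye_to_mouth * 0.1  # crop center, shifted toward the mouth

    quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
    qsize = np.hypot(*x) * 2
    return quad, qsize
```

The quad is then warped (e.g. with `PIL.Image.transform`) to the generator's input resolution.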

PXThanhLam commented 3 years ago

Thanks for your answer; I will align the images before feeding them into the network. PS: In the perceptual model, did you try computing the L2 loss over multiple layers instead of layer 22 only? (I computed the L2 loss over multiple layers and noticed that the result is much better.)
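A multi-layer perceptual loss of the kind described here sums L2 feature distances at several network depths rather than at a single layer (the "layer 22" above refers to relu4_3 in the torchvision VGG16 `features` indexing). Here is a minimal sketch; the small `backbone` is a hypothetical stand-in so the snippet is self-contained, and in real use you would substitute a pretrained VGG feature extractor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a pretrained VGG feature extractor; real use
# would substitute pretrained VGG16 features and matching layer indices.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

def multilayer_perceptual_loss(net, img_a, img_b,
                               layer_ids=(1, 4, 7), weights=None):
    """Sum of L2 distances between feature maps at several depths.

    layer_ids indexes into the Sequential (here: the ReLU outputs);
    weights optionally rescales each layer's contribution.
    """
    if weights is None:
        weights = [1.0] * len(layer_ids)
    loss = img_a.new_zeros(())
    feat_a, feat_b = img_a, img_b
    for i, layer in enumerate(net):
        # Run both images through the same layer and compare where requested.
        feat_a = layer(feat_a)
        feat_b = layer(feat_b)
        if i in layer_ids:
            w = weights[layer_ids.index(i)]
            loss = loss + w * F.mse_loss(feat_a, feat_b)
    return loss
```

Matching features at several depths constrains both low-level texture and higher-level structure, which is consistent with the better reconstructions reported below.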

zhujiapeng commented 3 years ago

We didn't try it with multiple layers. Better reconstruction, better manipulation, or something else?

PXThanhLam commented 3 years ago

Better reconstruction. I notice that the MSE between the two images is much smaller when computing the perceptual loss over multiple layers, and the reconstructed image looks more "realistic". Here are some reconstructions with and without the multi-layer perceptual loss (1000 iterations).

Original image: dicaprio_ori
Reconstruction with multi-layer perceptual loss: dicaprio_multi
Reconstruction with the original perceptual loss: dicaprio_single

PS: I have tried aligning and the result is the same (as in the examples above). I am currently trying to extend your work to StyleGANv2 and will let you know when it is finished.

ShenYujun commented 3 years ago

Thanks for pointing this out. We will test it following your suggestion.