hila-chefer / TargetCLIP

[ECCV 2022] Official PyTorch implementation of the paper Image-Based CLIP-Guided Essence Transfer.

encoding method #3

Closed leeisack closed 2 years ago

leeisack commented 2 years ago

I'm wondering how you encode the target or source image into the latent space. I usually use e4e or Image2StyleGAN; which one did you use?

hila-chefer commented 2 years ago

We use e4e :) see our README for further instructions.

leeisack commented 2 years ago

Ah, sorry, I didn't read the README and asked the question after only looking at the Colab code. Can you tell me which line the e4e code is on in the Colab? I can't find it.

hila-chefer commented 2 years ago

Our Colab uses images that were already encoded with e4e :) you can find the latents under dirs/w_plus.npy, so the Colab does not do the actual encoding. I am planning to expand our Colab to support inverting your own image instead of uploading the inverted latent, so if that's what you were looking for, stay tuned :) Until then, you can invert your own image with e4e, save the latent, and load it into the dirs folder in our notebook (see the example with my latent, and the sketch below).
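For reference, a minimal sketch of the manual loading step, assuming the latent was saved as a NumPy array; the filename `my_latent.npy` is hypothetical, and the shape (1, 18, 512) assumes the FFHQ StyleGAN2 W+ space used here:

```python
import numpy as np
import torch

# Load a latent you saved after inverting your image with e4e
# (hypothetical filename; place it under dirs/ next to the provided w_plus.npy).
latent = np.load('dirs/my_latent.npy')            # expected shape: (1, 18, 512) in W+
latent = torch.from_numpy(latent).float().cuda()  # move to the notebook's device
```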

hila-chefer commented 2 years ago

Here's a link to the notebook that uses e4e to encode an image and then applies our directions: https://colab.research.google.com/github/hila-chefer/TargetCLIP/blob/main/TargetCLIP%2Be4e.ipynb
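For anyone reading along, a minimal sketch of that inversion step, assuming the encoder4editing repo is on the Python path and a pretrained FFHQ checkpoint is available; the checkpoint and image paths are placeholders:

```python
import numpy as np
import torch
from argparse import Namespace
from PIL import Image
from torchvision import transforms
from models.psp import pSp  # pSp wrapper from the encoder4editing repo

# Load the pretrained e4e FFHQ encoder (checkpoint path is a placeholder).
ckpt = torch.load('e4e_ffhq_encode.pt', map_location='cpu')
opts = Namespace(**{**ckpt['opts'], 'checkpoint_path': 'e4e_ffhq_encode.pt'})
net = pSp(opts).eval().cuda()

# e4e expects a 256x256 input normalized to [-1, 1].
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
img = transform(Image.open('my_face.jpg').convert('RGB')).unsqueeze(0).cuda()

with torch.no_grad():
    # return_latents=True yields the W+ code alongside the reconstruction.
    _, latents = net(img, randomize_noise=False, return_latents=True)

np.save('dirs/my_latent.npy', latents.cpu().numpy())  # shape (1, 18, 512)
```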

leeisack commented 2 years ago

Thanks for replying, I appreciate it a lot.

loboere commented 2 years ago

Is there any advantage to using e4e over the ReStyle encoder? ReStyle inverts source images with more precision.