Closed by leeisack 2 years ago
We use e4e :) You can see our README for further instructions.
Ah, sorry, I didn't read the README and asked the question after looking only at the Colab code. Can you tell me which line of the Colab uses e4e? I can't find it.
Our Colab uses images that were already encoded with e4e :) You can find the latents under dirs/w_plus.npy, so our Colab does not do the actual encoding. I am planning to expand the Colab to support inversion of your own image instead of uploading the inverted latent, so if that's what you were looking for, stay tuned :) Until then, you can invert your own image with e4e, save the latent, and load it in our notebook from the dirs folder (see the example with my latent).
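The save/load round trip described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: the latent shape (18 style layers x 512 dims, a common StyleGAN W+ layout) and the use of a random placeholder instead of a real e4e inversion are assumptions for the example; only the `w_plus.npy` filename comes from the thread.

```python
import numpy as np

# Placeholder for the latent that e4e would produce for your image.
# A real run of the e4e encoder returns this array; here we just use
# random values with an assumed W+ shape of (18, 512).
latent = np.random.randn(18, 512).astype(np.float32)

# Save the inverted latent so the TargetCLIP notebook can pick it up
# (the thread says the notebook reads it from dirs/w_plus.npy).
np.save("w_plus.npy", latent)

# Inside the notebook, the latent is loaded back the same way:
loaded = np.load("w_plus.npy")
print(loaded.shape)  # (18, 512)
```

Any array saved this way with `np.save` is restored bit-for-bit by `np.load`, so the notebook sees exactly the latent the encoder produced.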
here's a link to the notebook that uses e4e to encode and then applies our directions: https://colab.research.google.com/github/hila-chefer/TargetCLIP/blob/main/TargetCLIP%2Be4e.ipynb
Thanks a lot for replying.
Is there any advantage to using e4e over the ReStyle encoder? ReStyle inverts the source images with more precision.
I wonder how to encode the target or source image into latent space. I usually use e4e or Image2StyleGAN; which one did you use?