-
Hi,
First, I would like to thank you for sharing the code for your awesome work.
I'm trying to reproduce the results, but I'm getting very strange results, especially for StyleGAN FFHQ. For example:
![…
-
https://github.com/yuval-alaluf/restyle-encoder
MIT
-
![image](https://user-images.githubusercontent.com/38728358/136728331-4e625378-ef2a-44f3-b64a-67e7876622bf.png)
As shown in the image above, the swapped face in the top row looks bad, while the bottom one looks good.
Did…
-
I trained a model for ffhq_encode, but the performance is poor in some scenes.
The background is difficult to learn. So what should I do to improve the performance? My training data is 5000 pictures. Shou…
-
Hi sir,
I recently read a paper called [StyleSpace](https://arxiv.org/abs/2011.12799).
1) Can we apply GAN inversion techniques to invert an image into the latent space described in StyleSpace? It see…
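For reference, this is a rough sketch of what I have in mind, i.e. optimizing the per-layer style vectors (S space) directly against a target image. The `synthesize` function and the per-layer channel counts are placeholders I made up, not anything from your code:

```python
import torch
import torch.nn.functional as F

def invert_to_stylespace(synthesize, style_channels, target_img, steps=1000, lr=0.01):
    """Optimize per-layer style vectors (S space) to reconstruct a target image.

    `synthesize(styles)` is a hypothetical function that renders an image from a
    list of per-layer style vectors; in StyleGAN2 these correspond to the outputs
    of each layer's affine transform, as in the StyleSpace paper.
    """
    device = target_img.device
    styles = [torch.zeros(1, c, device=device, requires_grad=True) for c in style_channels]
    optimizer = torch.optim.Adam(styles, lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        img = synthesize(styles)
        loss = F.mse_loss(img, target_img)  # a perceptual (LPIPS) term would likely help too
        loss.backward()
        optimizer.step()

    return [s.detach() for s in styles]
```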
-
Hi!
How can I input an image and manipulate its eye gaze? Is this repo only for random noise inputs?
-
Hello. I'm very grateful for your code (I even trained my own model with it), but I can't find where you state that you use a noise model different from the one in the original article. Currently I crossbreed …
-
Thanks for your great work. I have a question about the input parameter Pt of PGGAN. Do we need to roughly update the input parameter Pt by minimizing the per-pixel Manhattan distance between …
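To make the question concrete, here is a rough sketch of the kind of update I mean, where the latent input is optimized with a per-pixel Manhattan (L1) objective. `generator` is just a placeholder for the pretrained PGGAN generator, not your actual API:

```python
import torch
import torch.nn.functional as F

def invert_with_l1(generator, target_img, latent_dim=512, steps=500, lr=0.01):
    """Optimize a latent code by minimizing the per-pixel Manhattan (L1)
    distance between the generated image and the target image."""
    device = target_img.device
    latent = torch.randn(1, latent_dim, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        generated = generator(latent)            # (1, 3, H, W), same value range as target
        loss = F.l1_loss(generated, target_img)  # per-pixel Manhattan distance
        loss.backward()
        optimizer.step()

    return latent.detach()
```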
-
Hi, great work!
I wonder whether the inverse rendering (GAN inversion) script is currently available, and how to use it.
-
I'm very interested in your work. I'd like to know which GPUs you used and how long training takes.