-
Hello again. I'm trying your code (except I've chosen lucidrains' GAN) to invert fingerprints (toy project, my first working GAN, publicly available data). The GAN works nicely, but when trying your code I'm…
-
Hi. Thanks for your great work!
Is it possible to obtain other directions (for faces)?
You said you used StyleFlow, but StyleFlow needs latent codes and facial attributes (from the MS API).
And it uses W-l…
-
I tried it with my own pictures, but they all failed with: ValueError: operands could not be broadcast together with shapes (1,1,0) (1,1,512) (1,1,0). Are my pictures different from the demo images? My 2 c…
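For anyone hitting the same error: a (1,1,0) shape means one operand has an empty last axis, which NumPy cannot broadcast against (1,1,512). An empty array at that point usually suggests an upstream step produced no data for the image (that it is failed face alignment/detection is an assumption about this pipeline, not confirmed). A minimal reproduction of the broadcasting failure:

```python
import numpy as np

# Shapes taken from the error message above. The (1,1,0) operand is an
# empty array -- possibly an upstream step (e.g. face alignment) returned
# nothing for the input image (assumption, not confirmed by the repo).
empty = np.zeros((1, 1, 0))      # empty last axis, like the (1,1,0) operands
latent = np.zeros((1, 1, 512))   # a 512-dim latent, like the (1,1,512) operand

try:
    empty + latent               # any elementwise op fails the same way
    broadcast_ok = True
except ValueError as e:
    broadcast_ok = False
    print(e)                     # "operands could not be broadcast together ..."
```

Broadcasting only pads or stretches axes of size 1, so 0 vs. 512 is an irreconcilable mismatch; checking that the preprocessing step actually found a face in your picture would be the first thing to verify.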
-
Hi, sorry for asking a naive question; I am still learning GANs. Can this project be used for garment virtual try-on, or for pose transfer? Projects like [vogue](https://vogue-try-on.github.io/) feed the pos…
-
Hello, I have a question: how do we generate images directly from text descriptions? I ran the invert_v2.py code, and it seems to manipulate an input image.
-
I observed that in many images the change between the last image and the one before it is significant (sometimes better, sometimes worse, to the human eye). This is for the pretrained FFHQ ReStyle-pSp model. The t…
-
The inversion doesn't look like the face in the source image. How can I make it look more like the source image?
-
Hi there!
So I've managed to train a model on my own dataset, and it is starting to look very good. There are still some details that I'd like to improve if possible. For context, I am attempting to…
-
Hi!
According to your paper, it takes about a second to invert an image to its latent representation: "...and another 0.8 − 1 s for the inversion process". However, in your current implementation i…
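A gap between the paper's 0.8-1 s figure and measured times often comes from counting one-time costs (model loading, first-call CUDA kernel setup) or from stopping the clock before asynchronous GPU work has finished. A hedged timing harness (the name `time_inversion` and the `invert_fn`/`sync` interface are mine, not the repo's):

```python
import time

def time_inversion(invert_fn, image, warmup=3, runs=10, sync=None):
    """Average wall-clock seconds per call, excluding one-time setup costs.

    invert_fn : whatever maps an image to its latent (hypothetical name).
    sync      : optional callable, e.g. torch.cuda.synchronize -- needed on
                GPU because kernel launches return before the work is done.
    """
    for _ in range(warmup):      # warm-up absorbs load/compile cost
        invert_fn(image)
    if sync:
        sync()                   # drain queued GPU work before starting the clock
    t0 = time.perf_counter()
    for _ in range(runs):
        invert_fn(image)
    if sync:
        sync()                   # wait for async work before stopping the clock
    return (time.perf_counter() - t0) / runs
```

On GPU, pass `sync=torch.cuda.synchronize`; without it, the loop only measures how fast calls are queued, which can make the per-image time look either far too fast or, when the first timed call absorbs setup work, far too slow.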
-
Hi, I wonder if you can help me.
Basically I'd like to train a model similar to the Toonify model, except on a different target domain (I'm going for a more hand-drawn cartoony style) with unpaire…