-
Is there a reference for the face alignment used in the code? Would it be convenient to provide it? Can this algorithm be used to align bust images? Looking forward to your reply.
-
> I want to test some other images on your model, but I don't know how to do it. If I have an image sequence with pose data, how do I test?
1. Align the images the way StyleGAN does (a rough sketch follows below). You can refer to this sc…
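For reference, the alignment StyleGAN (FFHQ) uses is landmark-based: it builds an oriented crop rectangle from the eyes and mouth. Below is a minimal sketch of that recipe, assuming dlib and its 68-point `shape_predictor_68_face_landmarks.dat` model are available locally; it simplifies the official FFHQ preprocessing script (no padding or shrink handling). Whether it suits bust images depends on a face being detectable, since the crop is always face-centered.

```python
# Minimal sketch of FFHQ-style face alignment, assuming dlib and the
# 68-landmark predictor file are available. Simplified from the official
# FFHQ recipe (eye/mouth quad); padding and shrink steps are omitted.
import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def align_face(path, output_size=1024):
    img = np.array(Image.open(path).convert("RGB"))
    face = detector(img, 1)[0]  # first detected face
    pts = predictor(img, face)
    lm = np.array([(pts.part(i).x, pts.part(i).y) for i in range(68)])

    eye_left = lm[36:42].mean(axis=0)    # landmark groups from the
    eye_right = lm[42:48].mean(axis=0)   # standard 68-point scheme
    mouth = (lm[48] + lm[54]) * 0.5

    eye_avg = (eye_left + eye_right) * 0.5
    eye_to_eye = eye_right - eye_left
    eye_to_mouth = mouth - eye_avg

    # Oriented crop rectangle, as in the FFHQ preprocessing script.
    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
    x /= np.hypot(*x)
    x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
    y = np.flipud(x) * [-1, 1]
    c = eye_avg + eye_to_mouth * 0.1
    quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])

    # Warp the quad to an axis-aligned square of the requested size.
    return Image.fromarray(img).transform(
        (output_size, output_size), Image.QUAD,
        quad.flatten(), Image.BILINEAR)
```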
-
We are preparing our Restyle_psp_encoder with a custom dataset.
We have trained our StyleGAN3 network of type StyleGAN3-T (translation equivariant) and then converted the generated .pkl file to a .pt …
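For anyone attempting the same conversion, below is a minimal sketch of one common route, assuming it is run from inside the NVlabs stylegan3 repo (so `dnnlib` and `legacy` are importable). The file names and the saved dict layout are assumptions; adjust them to whatever your encoder training code expects. Note that ReStyle-style encoders may instead expect a rosinality-format checkpoint, which requires a separate weight-conversion step.

```python
# Sketch of a .pkl -> .pt conversion, assuming the stylegan3 repo is on
# the Python path. Paths and the output dict layout are hypothetical.
import torch
import dnnlib
import legacy

with dnnlib.util.open_url("stylegan3-t-custom.pkl") as f:  # hypothetical path
    G_ema = legacy.load_network_pkl(f)["G_ema"]

# Save only the EMA generator weights; rename the key to match whatever
# your encoder's checkpoint-loading code looks for.
torch.save({"g_ema": G_ema.state_dict()}, "stylegan3-t-custom.pt")
```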
-
https://github.com/facebookresearch/StyleNeRF/blob/03d3800500385fffeaa2df09fca649edb001b0bb/apps/inversion.py#L119
If we set encoder_z=True, the shape of the zs output from E is [1, 17, 512], but the map…
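I don't know StyleNeRF's intended handling here, but one hypothetical way to reconcile per-layer z codes with a mapper that only accepts `[N, 512]` inputs is to fold the layer dimension into the batch, map, and pick each layer's own row back out. All names below are assumptions, not the repo's API:

```python
# Hypothetical sketch: push per-layer z codes [1, 17, 512] through a
# StyleGAN-style mapping network expecting [N, 512]. Assumes the mapper
# broadcasts each z to num_ws rows and that num_ws equals the 17 codes.
import torch

def map_layerwise_zs(mapping, zs, c=None):
    n, num_ws, z_dim = zs.shape                      # [1, 17, 512]
    flat = zs.reshape(n * num_ws, z_dim)             # [17, 512]
    ws = mapping(flat, c)                            # [17, num_ws, 512]
    # For layer i, keep the w produced from that layer's own z (diagonal).
    ws = ws[torch.arange(num_ws), torch.arange(num_ws)]  # [17, 512]
    return ws.unsqueeze(0)                           # [1, 17, 512]
```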
-
Hi authors:
Many thanks for your excellent work. However, I have some questions about your novel definition of W, as follows:
What is the difference between ?
![image](https://user-images.githubuserc…
-
First of all, I love your work on GAN inversion and editing.
I was experimenting with this codebase and had a question.
We were trying to train this codebase with, say, a variable-input StyleGAN like (10…
-
Hi, after training for close to 3 weeks on a GeForce Titan RTX, the results were not satisfactory.
![resultadosytlegan250000](https://user-images.githubusercontent.com/6282156/168848640-49323f63…
-
Thanks for the great work!
For anyone interested, StyleGAN-NADA can now be run in a Kaggle notebook with a P100 GPU, giving around a 2x-3x speed-up compared to the free Colab notebook.
Visit the annotated not…
-
Hello! Thanks for the work done; the results look great. I was particularly impressed by your image inversion, but I am not quite sure how it works. Do you plan to publish the relevant code?
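While waiting for the authors: many StyleGAN inversions are optimization-based projection, i.e. optimize a latent w so the generator's output matches the target under a perceptual plus pixel loss. Below is a minimal sketch of that generic baseline, assuming a stylegan2-ada-style generator `G` with a `mapping`/`synthesis` split and the `lpips` package; this is not the authors' method, and all names are assumptions.

```python
# Generic optimization-based projection sketch (not this repo's method).
# Assumes G.mapping(z, c) -> [N, num_ws, 512] and G.synthesis(w) -> image.
import torch
import lpips

def project(G, target, steps=500, lr=0.01, device="cuda"):
    # target: [1, 3, H, W] tensor in [-1, 1], matching G's output resolution.
    percept = lpips.LPIPS(net="vgg").to(device)

    # Initialize from the average w for a stable starting point.
    with torch.no_grad():
        z = torch.randn(10000, G.z_dim, device=device)
        w_avg = G.mapping(z, None).mean(dim=0, keepdim=True)
    w = w_avg.clone().requires_grad_(True)

    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img = G.synthesis(w)
        loss = percept(img, target).mean() + 0.1 * ((img - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```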
-
Thanks for the interesting work.
I would like to ask how you obtained the S space. The paper says it is obtained by the In-Domain GAN method, but I read the In-Domain GAN paper to get the W spa…
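For context while waiting for an answer: in StyleSpace-style analyses, the S (style) space is the set of per-layer, channel-wise styles produced by each convolution's learned affine map applied to w, i.e. s_i = A_i(w_i). A minimal sketch for a stylegan2-ada-style generator follows; the `.affine` attribute layout and the w-index bookkeeping are assumptions and are simplified here.

```python
# Sketch: collect style-space codes s_i = A_i(w_i) from a stylegan2-ada-style
# generator. Assumes each conv/toRGB layer exposes a `.affine` module; the
# mapping of w indices to layers is simplified (torgb sharing is ignored).
import torch

def w_to_s(G, w):
    # w: [1, num_ws, 512]; returns a list of per-layer style vectors.
    styles = []
    idx = 0
    for name, module in G.synthesis.named_modules():
        if hasattr(module, "affine"):
            layer_w = w[:, min(idx, w.shape[1] - 1)]  # guard index overrun
            styles.append(module.affine(layer_w))
            idx += 1
    return styles
```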