-
Hello, and apologies in advance for any ignorance in this message.
I do a lot of work restoring and colourising photographic portraits of historical figures. After discovering a…
-
Hi, thanks for this. What image size did you test with? Can we use images of any size? I am getting errors at 1024 × 768 resolution.
-
I downloaded the required models and placed them in the corresponding folder. I then ran the code on an RTX 3090 without changing anything, but the following error appeared:
```
Loading ResNet ArcFace
…
```
-
I tried to apply the styles I found through StyleCLIP, with shape `[18, 512]`, to the `codes` variable in the pSp forward function, but they don't seem to work in the hair/age or inversion (after optimization) networks…
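For context, here is a minimal sketch (plain NumPy, stand-in names and random data, not the repo's actual code) of the shape handling involved when applying a single `[18, 512]` StyleCLIP edit to a batched `codes` tensor of shape `[B, 18, 512]`:

```python
import numpy as np

# Hypothetical stand-ins: pSp's `codes` is a batch of W+ latents, [B, 18, 512];
# a StyleCLIP global direction is a single [18, 512] edit.
batch_size = 2
codes = np.random.randn(batch_size, 18, 512).astype(np.float32)  # encoder output (stand-in)
direction = np.random.randn(18, 512).astype(np.float32)          # StyleCLIP edit (stand-in)
alpha = 0.1                                                      # edit strength

# Broadcasting adds the [18, 512] direction to every sample in the batch.
edited = codes + alpha * direction
assert edited.shape == (batch_size, 18, 512)
```

If the `[18, 512]` style is meant to *replace* `codes` outright rather than offset it, it first has to be expanded to the batched rank (e.g. `direction[None, ...]`), or the generator will see a mismatched shape.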
-
Thanks to the authors for such influential work.
Equation 5 seems to be incorrect; compare Equation 7 in the paper "DiffusionCLIP: Text-Guided Diffusion Models for Robust Image Manipulation".
…
-
First of all, I want to thank you for this wonderful project! It is another step forward for the StyleGAN inversion problem.
The reconstruction quality is very good, close to that of optimization-based approaches. H…
-
```
Namespace(gpus=4, dataset='carla', xid='', resolution=32, batch_size=8, run_inversion=False, resume_from=None, root_path='.', data_path='../data/nerf', iterations=300000, lr_g=0.0025, lr_d=0.002,…
```
-
Hi Sergei, thank you for your amazing research and this repo. By following the README, I was able to generate the CoreML model, and I noticed that the generated model runs fully on the ANE in my testing wh…
-
Will the StyleGAN inversion encoder be trained? I found that the CLIP image encoder and CLIP text encoder use detach() to keep them from being trained. I look forward to your answer. Thank you!
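To illustrate what that `detach()` does, here is a toy sketch (stand-in linear layers, not the repo's actual modules): detaching a tensor cuts the autograd path, so the module that produced it receives no gradients and stays frozen, while the other branch still trains.

```python
import torch

# Stand-ins: a "frozen" CLIP-like module and a trainable inversion encoder.
clip_like = torch.nn.Linear(4, 4)   # pretend CLIP encoder (to be frozen)
encoder = torch.nn.Linear(4, 4)     # pretend inversion encoder (to be trained)

x = torch.randn(1, 4)
target = clip_like(x).detach()      # .detach(): no gradients flow back into clip_like
pred = encoder(x)
loss = ((pred - target) ** 2).mean()
loss.backward()

print(clip_like.weight.grad)             # None: the detached branch got no gradient
print(encoder.weight.grad is not None)   # True: the encoder still gets updates
```

So the CLIP encoders being wrapped in `detach()` is exactly what keeps them untrained while everything upstream of the loss on the other branch learns.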
-
Hello,
Thank you for the nice work.
How can I run your algorithm on my own model? How do I generate `latents.pt`?
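For reference, a hedged sketch of how a `latents.pt` file is commonly produced in StyleGAN inversion pipelines (assumed format: a tensor of W+ codes written with `torch.save`; random data stands in for real inverted latents, and the repo's expected layout may differ):

```python
import torch

# Stand-in for latents obtained by inverting N images with an encoder
# (e.g. e4e/pSp); each code is a W+ latent of shape [18, 512].
num_images = 4
latents = torch.randn(num_images, 18, 512)

torch.save(latents, "latents.pt")   # write the file in the assumed format
loaded = torch.load("latents.pt")
assert loaded.shape == (num_images, 18, 512)
```

Whether the script expects a bare tensor, a dict, or per-image files is repo-specific, so check how `latents.pt` is loaded in the code before adopting this layout.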