-
Results from CGD appear to be better than those from VQGAN-CLIP. Investigate and implement!
Check out this colab from Crowson:
https://colab.research.google.com/drive/1mpkrhOjoyzPeSWy2r7T8EYRaU7amYOOi…
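For context on why CGD can behave differently from VQGAN-CLIP: instead of optimizing a generator's latents directly, CLIP-guided diffusion nudges each denoising step with the gradient of CLIP similarity to the prompt. A minimal sketch of such a guidance hook, assuming a guided-diffusion-style `cond_fn(x, t)` interface and an already-loaded CLIP model (all names here are hypothetical, not taken from the notebook):

```python
import torch

def make_clip_cond_fn(clip_model, text_embed, guidance_scale=1000.0):
    """Build a cond_fn that steers sampling toward `text_embed`.

    Hypothetical sketch: assumes `clip_model.encode_image` accepts the
    current (suitably resized and normalized) sample batch directly.
    """
    def cond_fn(x, t, **kwargs):
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            img_embed = clip_model.encode_image(x_in)
            # Higher cosine similarity to the prompt => follow its gradient.
            sim = torch.cosine_similarity(img_embed, text_embed, dim=-1).sum()
            grad = torch.autograd.grad(sim, x_in)[0]
        return grad * guidance_scale
    return cond_fn
```

The returned gradient is what guided-diffusion adds (scaled by the step's variance) to the reverse-process mean at each step.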
-
Katherine has released a better notebook for CLIP-guided diffusion. Output on a P100 is quite slow, but results can be very good. I've put the new notebook in my current repo as the "HQ" version.…
-
**Is your feature request related to a problem? Please describe.**
It would be great to add support for using [CLIP](https://huggingface.co/models?search=clip+laion) in WebUI. It seems to have [bette…
-
Hi, thank you for sharing! I encounter an error related to autograd() when I run the Clip_guided.py demo. Can you help me check it? Thanks a lot!
Traceback (most recent call last):
…
-
Hello,
I have finished installing CLIP-Guided-Diffusion, but when I run it, this error happens:
Device: cpu
Size: 256
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loadi…
-
Dear @ouhenio,
Thanks for your work. I am trying to use your repo, but there is a problem: when I run `pip install -e ./CLIP & pip install -e ./guided-diffusion`, it does not work.
**Tra…
-
Hello all,
This is fantastic work. Any examples on conditional sampling with classifier guidance?
Say e.g. -> https://github.com/crowsonkb/guided-diffusion, https://github.com/nerdyrodent/CLIP-…
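On conditional sampling with classifier guidance: the core of the technique is that the reverse-process Gaussian mean is shifted by the scaled gradient of a classifier's log-probability, mu' = mu + s * Sigma * grad_x log p(y | x_t) (Dhariwal & Nichol, 2021). A minimal numeric sketch of just that update rule (illustrative only, not code from either linked repo):

```python
import numpy as np

def guided_mean(mean, variance, grad_log_p, scale=1.0):
    # Classifier guidance: shift the reverse step's Gaussian mean
    # along the classifier gradient,
    #   mu' = mu + s * Sigma * grad_x log p(y | x_t)
    return mean + scale * variance * grad_log_p
```

With scale=0 this reduces to unconditional sampling; larger scales trade sample diversity for fidelity to the conditioning label.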
-
I'm getting an error from the text transformer component when trying to run the CLIP guided generation. Any ideas as to how I might approach debugging here?
```
CUDA_LAUNCH_BLOCKING=1 python sampl…
```
-
Traceback (most recent call last):
File "pic_disco.py", line 704, in run
File "guided_diffusion\gaussian_diffusion.py", line 900, in ddim_sample_loop_progressive
File "guided_diffusion\gaussi…
-
Is there any way to preserve faces when inpainting?
Will CLIP-guided diffusion ever support inpainting as well, or is that a dumb question?
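For what it's worth, diffusion inpainting is commonly done by compositing at every reverse step: the model's output is kept only inside the masked hole, while the known pixels are re-noised from the original image to match the current timestep (RePaint-style). A minimal sketch of that per-step composite, with names and interface assumed rather than taken from this repo:

```python
import numpy as np

def inpaint_composite(x_t, known_image, mask, noise_sigma, rng=None):
    """One reverse-step composite for diffusion inpainting.

    mask == 1 marks known pixels to preserve; mask == 0 marks the hole
    the model is free to fill. The known region is re-noised to match
    the current timestep's noise level before compositing.
    """
    if rng is None:
        rng = np.random.default_rng()
    noised_known = known_image + noise_sigma * rng.standard_normal(known_image.shape)
    return mask * noised_known + (1.0 - mask) * x_t
```

Face preservation would then fall out of the mask: keep mask = 1 over the face region so those pixels are always restored from the source image at every step.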