-
First of all, batch generation would be very cool to have.
Also, is there any way you could implement some form of upscaling here? Nothing too complicated, just basic 2x or 4x upscaling using any mod…
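For the "basic" end of this request, a minimal sketch of classical 2x/4x upscaling, assuming Pillow is available (a model-based upscaler such as Real-ESRGAN would give sharper results, but this covers the simple case):

```python
from PIL import Image

def upscale(img: Image.Image, factor: int = 2) -> Image.Image:
    # Lanczos is a decent classical resampling filter for plain integer upscaling.
    return img.resize((img.width * factor, img.height * factor), Image.LANCZOS)

# Example on a dummy 64x64 image
up = upscale(Image.new("RGB", (64, 64)), factor=2)
print(up.size)  # (128, 128)
```

The same function with `factor=4` covers the 4x case.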
-
Hi!
Can someone please tell me how to convert fp32 diffusers models [the ones that have unet, vae, tokenizer, etc. folders and are used for Stable Diffusion] to fp16?
I can't find any tutorial or…
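Not a full tutorial, but a minimal sketch of one common approach using the `diffusers` library: load the fp32 folder with fp16 casting, then re-save it. The paths here are placeholders.

```python
def convert_to_fp16(src: str, dst: str) -> None:
    """Sketch: cast an fp32 diffusers model folder to fp16 and save a copy."""
    import torch
    from diffusers import StableDiffusionPipeline

    # src is the folder containing the unet/, vae/, tokenizer/, ... subfolders;
    # torch_dtype=torch.float16 casts the weights as they are loaded.
    pipe = StableDiffusionPipeline.from_pretrained(src, torch_dtype=torch.float16)
    # Writes a new folder whose weight files are stored in fp16.
    pipe.save_pretrained(dst)

# e.g. convert_to_fp16("path/to/fp32-model", "path/to/fp16-model")
```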
-
Hi,
I have a low-core CPU but a big GPU; how do I make this use only the GPU? My GPU is an RTX 4090.
Thanks in advance.
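Assuming the tool wraps a diffusers pipeline, moving the whole pipeline to CUDA is what keeps the heavy work on the GPU; a minimal sketch:

```python
import torch

def pick_device() -> str:
    # Prefer the GPU (e.g. an RTX 4090) when CUDA is available, else fall back to CPU.
    return "cuda" if torch.cuda.is_available() else "cpu"

# With a diffusers pipeline object `pipe`, one call moves every component:
#   pipe = pipe.to(pick_device())
print(pick_device())
```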
-
Could you make a Google Colab link?
-
Is it able to run on the CPU? It would be nice if it could be optimized to run on the CPU; I think there are Stable Diffusion implementations that can run on the CPU, and it would be great if this could too.
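For reference, a diffusers pipeline can already be run CPU-only; a minimal sketch, with the model id as a placeholder (fp32 is kept because most CPUs lack fast fp16 kernels):

```python
def load_cpu_pipeline(model_id: str):
    """Sketch: load a diffusers pipeline for CPU-only inference."""
    import torch
    from diffusers import DiffusionPipeline

    # fp32 on CPU: fp16 is poorly supported on most CPUs.
    pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)
    return pipe.to("cpu")
```

CPU generation is slow with a standard scheduler, which is exactly where low-step methods like LCM help.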
-
anything-v5 + latent-consistency/lcm-lora-sdxl:
```
***** Init LCM-LoRA pipeline - stablediffusionapi/anything-v5 *****
Loading pipeline components...: 14%|████████████████████████▍ …
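One possible cause, offered only as an assumption: anything-v5 is an SD 1.5-family model, while `lcm-lora-sdxl` is trained for SDXL; the matching LoRA for SD 1.5 bases is `latent-consistency/lcm-lora-sdv1-5`. A minimal sketch of that pairing:

```python
def init_lcm_pipeline():
    """Sketch: pair an SD 1.5 base with the matching SD 1.5 LCM-LoRA."""
    import torch
    from diffusers import DiffusionPipeline, LCMScheduler

    pipe = DiffusionPipeline.from_pretrained(
        "stablediffusionapi/anything-v5", torch_dtype=torch.float32
    )
    # LCM needs its own scheduler on top of the base model's config.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    # lcm-lora-sdxl targets SDXL; SD 1.5 bases take the sdv1-5 LoRA instead.
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
    return pipe
```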
-
Hello!
Thank you for this!
I'm a huge fan of ONNX as it is very CPU-friendly. Could you please try to make AnimateDiff work with ONNX too? It would be great to have text-to-GIF and image-to-GIF with An…