Closed DDZ0920 closed 2 months ago
What I mean is that with every stroke it has to load this "model" again, which wastes a huge amount of time. What exactly is it loading, and is there any solution?
This usually means you don't have enough VRAM to keep everything on the GPU, so ComfyUI has to load and unload models for different parts of the process. And yes, that ruins performance.
You can try to override ComfyUI's detected settings by passing `--highvram`, for example. But if you don't actually have enough VRAM, it will just crash.
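Before forcing `--highvram`, it can help to sanity-check whether your models plus some working headroom actually fit in VRAM. A minimal sketch of that arithmetic (the model sizes and headroom value below are illustrative assumptions, not measurements from this plugin):

```python
def fits_in_vram(model_sizes_gb, vram_gb, headroom_gb=1.5):
    """Rough check: do all resident models plus working headroom fit in VRAM?

    headroom_gb approximates memory needed for activations/latents during
    sampling; the real figure depends on resolution and batch size, so this
    is only a ballpark estimate.
    """
    return sum(model_sizes_gb) + headroom_gb <= vram_gb

# Illustrative fp16 sizes: SDXL UNet ~6.5 GB, VAE ~0.2 GB, text encoders ~1.5 GB
print(fits_in_vram([6.5, 0.2, 1.5], vram_gb=8))   # 8 GB card: too tight, expect swapping
print(fits_in_vram([6.5, 0.2, 1.5], vram_gb=16))  # 16 GB card: should stay resident
```

If the check comes out tight, `--highvram` will likely just trade the reload delay for out-of-memory crashes, and a smaller model (e.g. SD 1.5 instead of XL) is the safer fix.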
Hey everyone, This plugin is absolutely amazing and super useful! Huge thanks to the developer! However, I've run into a few issues:
1️⃣Using the 1.5 model + LoRA on a 512x512 canvas works smoothly. However, when I switch to 1024, it needs to "load 1 new model" with every stroke. This significantly slows down the sync rate, making the generation time longer and preventing real-time display.
2️⃣When using the XL model, whether on 512 or 1024, it always shows "loading 1 new model." This really ruins the real-time experience.
Does anyone know why this is happening? Any solutions? Thanks a lot!