BlinkerHigh opened this issue 2 weeks ago
Are you using the InstantX Flux ControlNet? Is that supported natively? https://github.com/comfyanonymous/ComfyUI/issues/4567
Or are you using ComfyUI-eesahesNodes for the ControlNet? The issue may be on their side: https://github.com/EeroHeikkinen/ComfyUI-eesahesNodes/issues
Getting the same exact error; it worked fine before the update.
Considering I have the same issue with LoRAs, it's unlikely that the ControlNet node is the problem, especially since it was working fine before the update.
In my case, final outputs are still eventually produced with or without ControlNet, but my ControlNet nodes now reload and reprocess their control image every time I queue a prompt, which takes a lot of time. I'm on an SD1.5 setup.
Correction: everything reloads every time, not just ControlNet, including IPAdapter's temporary models. I'm guessing (75%) that this has something to do with the update's new RAM-handling adjustments, so it may have nothing to do with this issue.
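To illustrate the reload behavior being described: nodes generally skip re-running when a cached output keyed on their inputs is still valid, so a symptom like this suggests the cache key (or the cache itself) is being invalidated on every queue. A minimal sketch of that pattern, with hypothetical names (`cached_preprocess`, `_cache`) that are not ComfyUI's actual API:

```python
import hashlib

_cache = {}  # hypothetical node-output cache, keyed by input hash

def cached_preprocess(image_bytes: bytes, params: str, preprocess):
    """Re-run preprocessing only when the inputs actually change.

    Illustrative only: ComfyUI's real caching works differently, but the
    failure mode is the same -- if the key (or the cache) is invalidated
    on every queue, every node reloads and reprocesses each time.
    """
    key = hashlib.sha256(image_bytes + params.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = preprocess(image_bytes, params)
    return _cache[key]
```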
Since this is probably a different problem, I made a separate issue here.
Same for me... after a "git pull" today I always get OOM, even when loading a minimal-memory Q2 Flux model. Yesterday everything was fine.
I'm using the Q8 GGUF model.
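For anyone hitting the OOM, a quick sanity check with plain PyTorch (nothing ComfyUI-specific, assuming a CUDA device) shows how much VRAM is actually free before the model even loads:

```python
import torch

# Report free/total memory on the default CUDA device.
free, total = torch.cuda.mem_get_info()
print(f"VRAM free: {free / 2**30:.2f} GiB / total: {total / 2**30:.2f} GiB")
```

If this already shows almost no free VRAM, another process (or a model left resident by a previous run) is the culprit rather than the quantized model itself.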
Not sure if it's related, but running on the latest commit 9230f65 I get `Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding.` every time.
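That warning corresponds to an OOM fallback: when a full-frame VAE decode doesn't fit in VRAM, the decode is retried in tiles. A simplified sketch of the pattern (the real ComfyUI code path differs in detail; `decode` / `decode_tiled` here just mirror the method names it exposes):

```python
import torch

def safe_vae_decode(vae, latent):
    """Decode a latent image, falling back to tiled decoding on OOM."""
    try:
        # Fast path: decode the whole latent in one pass.
        return vae.decode(latent)
    except torch.cuda.OutOfMemoryError:
        # The warning above fires here: free what we can, then retry
        # in smaller tiles that fit in the available VRAM.
        torch.cuda.empty_cache()
        return vae.decode_tiled(latent)
```

Tiled decoding is slower and can show faint seams, but it keeps generation from failing outright, which matches the "final outputs still eventually get produced" reports above.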
I'm thinking it might be more related to the LoRA. Since the last update, I can no longer merge LoRAs into Flux either: the console just prints `Lora key not loaded` a bunch of times, then it loads the model, goes idle, and the merged model never saves.
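The `Lora key not loaded` messages usually mean the keys inside the LoRA file don't match what the loader expects for that model architecture. A quick way to inspect them (hypothetical file path; requires the `safetensors` package):

```python
from safetensors import safe_open

lora_path = "my_flux_lora.safetensors"  # hypothetical path, use your own file

with safe_open(lora_path, framework="pt") as f:
    keys = list(f.keys())

print(f"{len(keys)} keys in LoRA")
for k in keys[:10]:
    # Key prefixes hint at the format the LoRA was trained for,
    # which helps tell a format mismatch from a loader regression.
    print(k)
```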
It was not related to LoRAs or which QX model was loaded... I tested without LoRAs and with Q2, Q4, Q5, etc., and always hit OOM. That was yesterday; I don't know if there are fixes today.
Expected Behavior
Generation finishes normally.
Actual Behavior
Generation stops midway when a LoRA or ControlNet is used, sometimes with an error message and sometimes just hanging. The workflow hasn't changed, and this wasn't happening before the update.
Steps to Reproduce
Update ComfyUI and use a LoRA or ControlNet. Launch arguments used: --lowvram --preview-method auto --use-split-cross-attention
Debug Logs
Other
No response