Closed: ranpo closed this issue 2 months ago
Thank you. Yes, the previous version never unloaded the model. If you want it to go faster, you can set keep_alive to get the old behavior back, but it uses more memory.
Alright, thank you very much for a great custom node :)
thank you
EDIT: Sorry, I tried something else and found the "problem" behind the slow speed. "keep alive" is off by default, and after turning it on the speed is back to normal, as it was in the previous version. I guess in the previous version it was on by default?
But as you can see in the log, the first run was fast (16 seconds, including the time to load the model). Subsequent runs, even accounting for model loading and unloading (because keep alive is off), took around 100 seconds.
Hello. I just upgraded ComfyUI, ComfyUI-IF_AI_tools, and Ollama. The first execution ran normally, but subsequent executions, without changing the image, are extremely slow.
I tried running llava directly in Ollama (not ComfyUI), and the speed is normal (very quick).
The nodes I use are IF Chat Prompt and IF Image Prompt. All settings are default except the model (llava 7b) and the profile (none and IF_PromptMKR_IMG).
Also, is the node supposed to keep executing even when nothing is connected to it?
Thank you :)