Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
Apache License 2.0
Loading the wrong GGUF model causes a crash (core dump) #84
I downloaded the two GGUF model files and put them in the folder as instructed.
Then I added two nodes in ComfyUI, as shown in the picture, to load the models.![image](https://github.com/gokayfem/ComfyUI_VLM_nodes/assets/14249458/361b5a21-ecf0-431d-9073-cb126b302259)
However, I didn't pay attention to which model file each node should load; I assumed the right file would be selected by default. When I ran the workflow, the whole system crashed with a core dump.
It took me quite a while to discover that the "Llava Clip Loader" was trying to load the main LLaVA model (in my case, llava-v1.6-mistral-7b.Q3_K_XS.gguf); once I selected the correct file (mmproj-model-f16.gguf), it worked.
It would be great if the right file were selected by default, or if the node did some error checking instead of relying on llama-cpp. C++ is dangerous.
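One way the requested error check could look: read the GGUF metadata before handing the file to llama-cpp and verify it is actually a projector model. This is only a sketch, assuming mmproj/CLIP GGUF files record `general.architecture == "clip"` in their metadata; the function names here are illustrative, not part of this repo's API.

```python
# Sketch: validate that a GGUF file is an mmproj/CLIP model before loading it
# with llama-cpp, so a wrong selection fails with a Python error instead of a
# core dump. Assumption: mmproj files carry general.architecture == "clip".
import struct

# GGUF scalar value types -> (struct format, byte size); 8 = string, 9 = array
_SCALAR = {0: ("<B", 1), 1: ("<b", 1), 2: ("<H", 2), 3: ("<h", 2),
           4: ("<I", 4), 5: ("<i", 4), 6: ("<f", 4), 7: ("<?", 1),
           10: ("<Q", 8), 11: ("<q", 8), 12: ("<d", 8)}

def _read_string(f):
    # GGUF strings are a uint64 length followed by UTF-8 bytes
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8")

def gguf_architecture(path):
    """Return the general.architecture value of a GGUF file, or None."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError(f"{path} is not a GGUF file")
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
        for _ in range(n_kv):
            key = _read_string(f)
            (vtype,) = struct.unpack("<I", f.read(4))
            if vtype == 8:
                val = _read_string(f)
            elif vtype in _SCALAR:
                fmt, size = _SCALAR[vtype]
                (val,) = struct.unpack(fmt, f.read(size))
            else:
                break  # arrays etc. are not needed for this check
            if key == "general.architecture":
                return val
    return None

def check_clip_model(path):
    # Hypothetical guard a loader node could run before calling llama-cpp
    arch = gguf_architecture(path)
    if arch != "clip":
        raise ValueError(
            f"Expected an mmproj/CLIP GGUF for the Llava Clip Loader, "
            f"but {path} has architecture {arch!r}")
```

With a guard like this, selecting `llava-v1.6-mistral-7b.Q3_K_XS.gguf` in the clip loader would raise a readable `ValueError` instead of crashing the whole process.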