gokayfem / ComfyUI_VLM_nodes

Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
Apache License 2.0
384 stars 31 forks

Error occurred when executing LlavaCaptioner: this function takes at least 4 arguments (0 given) #24

Closed vivi-gomez closed 7 months ago

vivi-gomez commented 7 months ago

Whatever I try, I always get this error:

    venv/lib/python3.10/site-packages/llama_cpp/llama_chat_format.py", line 1959, in __call__
      self._llava_cpp.llava_image_embed_make_with_bytes(
    TypeError: this function takes at least 4 arguments (0 given)

Can someone shed some light on how to solve this?

gokayfem commented 7 months ago

Are you sure this node is from VLM_nodes? Can you send me a picture of the workflow?

Could it be from https://github.com/ceruleandeep/ComfyUI-LLaVA-Captioner instead?

pselvana commented 7 months ago

The llama_cpp_python package is broken: llama-cpp-python==0.2.44 works, but 0.2.50 fails with the same error outside of ComfyUI. I'm not sure which version introduced the breakage. The latest version is always picked up for install:

   lcpp_version = latest_lamacpp()

But it looks like you can install it yourself, and the installer will then skip installing the latest:

imported = package_is_installed("llama-cpp-python") or package_is_installed("llama_cpp")
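For illustration, here is a minimal sketch of how such a presence check could work, using Python's standard `importlib`. The body of `package_is_installed` here is a hypothetical stand-in, not the node pack's actual implementation:

```python
import importlib.metadata
import importlib.util


def package_is_installed(name: str) -> bool:
    """Return True if the package is importable or has distribution metadata."""
    # Module names use underscores even when the PyPI name uses hyphens.
    if importlib.util.find_spec(name.replace("-", "_")) is not None:
        return True
    try:
        importlib.metadata.version(name)
        return True
    except importlib.metadata.PackageNotFoundError:
        return False


# If either spelling is found, the installer skips pulling the latest release.
imported = package_is_installed("llama-cpp-python") or package_is_installed("llama_cpp")
```

So a manually pinned install is detected and left alone instead of being upgraded to the (broken) latest version.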

vivi-gomez commented 7 months ago

This was the solution for me. As of now, llama-cpp-python==0.2.52 also works.

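In a venv, the pin can be applied by hand before launching ComfyUI. This is a sketch assuming a standard pip setup; the known-good versions are the ones reported in this thread:

```shell
# Replace the broken build with a version reported working (0.2.44 or 0.2.52).
pip uninstall -y llama-cpp-python
pip install "llama-cpp-python==0.2.52"
# Confirm what actually got installed.
pip show llama-cpp-python
```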

vivi-gomez commented 7 months ago

I didn't even know I had that addon installed. However, the ComfyUI workflow was stopping at LLaVA Sampler Simple, which is identified as one of the VLM_nodes.

Thank you


pselvana commented 7 months ago


Great! That's the first version that fixes it with this commit: https://github.com/abetlen/llama-cpp-python/commit/8383a9e5620f5df5a88f62da16813eac200dd706
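Based on the versions reported in this thread (0.2.44 good, 0.2.50 broken, 0.2.52 fixed), a node pack could guard against the broken range with a simple version check. A hypothetical sketch, not part of VLM_nodes:

```python
def llava_embed_ok(version_str: str) -> bool:
    """Return True if this llama-cpp-python version is safe for LLaVA image embeds.

    Per this thread: <= 0.2.44 works, 0.2.50 is broken, >= 0.2.52 is fixed.
    """
    v = tuple(int(part) for part in version_str.split(".")[:3])
    return v <= (0, 2, 44) or v >= (0, 2, 52)


# Hypothetical usage at startup:
# from importlib.metadata import version
# if not llava_embed_ok(version("llama-cpp-python")):
#     raise RuntimeError("Broken llama-cpp-python build; pin 0.2.52 or newer.")
```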