ceruleandeep / ComfyUI-LLaVA-Captioner

A ComfyUI extension for chatting with your images with LLaVA. Runs locally, no external services, no filter.
GNU General Public License v3.0

Error when using load img list #6

Status: Open · opened by suede299 6 months ago

suede299 commented 6 months ago

Error occurred when executing LlavaCaptioner: `Failed to create llama_context`

```
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-LLaVA-Captioner\llava.py", line 229, in caption
    llava = wait_for_async(lambda: get_llava(model, mm_proj, -1))
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-LLaVA-Captioner\llava.py", line 168, in wait_for_async
    loop.run_until_complete(run_async())
  File "asyncio\base_events.py", line 653, in run_until_complete
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-LLaVA-Captioner\llava.py", line 158, in run_async
    r = await async_fn()
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-LLaVA-Captioner\llava.py", line 88, in get_llava
    llm = Llama(
  File "D:\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\llama_cpp\llama.py", line 327, in __init__
    self._ctx = _LlamaContext(
  File "D:\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\llama_cpp\_internals.py", line 265, in __init__
    raise ValueError("Failed to create llama_context")
```

It looks like this workflow assumes a contextual relationship between consecutive images. Does that mean the only option is to process images manually, one at a time?
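If single-image processing is the constraint, one workaround is to unroll the image list on the caller's side and caption each image independently while reusing a single loaded model. This is a minimal sketch of that pattern only; `caption_one` is a hypothetical stand-in for the node's captioning call, not a function from this repository:

```python
def caption_batch(images, caption_one):
    """Caption a sequence of images one at a time.

    `images` is any iterable of per-image data; `caption_one` is a
    hypothetical callable that captions a single image using an
    already-loaded model, so the model is loaded once, not per image.
    """
    captions = []
    for image in images:
        captions.append(caption_one(image))
    return captions
```

Whether this avoids the error depends on why `llama_context` creation fails for lists in the first place; if each call still constructs a fresh `Llama` instance, the loop alone will not help.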

suede299 commented 6 months ago

Switching the image input back to a single image turns this into a model-loading error instead:

Error occurred when executing LlavaCaptioner:

Failed to load model from file: D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-LLaVA-Captioner\models\llava-v1.6-vicuna-13b.Q4_K_M.gguf

```
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-LLaVA-Captioner\llava.py", line 229, in caption
    llava = wait_for_async(lambda: get_llava(model, mm_proj, -1))
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-LLaVA-Captioner\llava.py", line 168, in wait_for_async
    loop.run_until_complete(run_async())
  File "asyncio\base_events.py", line 653, in run_until_complete
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-LLaVA-Captioner\llava.py", line 158, in run_async
    r = await async_fn()
  File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-LLaVA-Captioner\llava.py", line 88, in get_llava
    llm = Llama(
  File "D:\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\llama_cpp\llama.py", line 313, in __init__
    self._model = _LlamaModel(
  File "D:\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\llama_cpp\_internals.py", line 55, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
```
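Both failures occur inside `get_llava` when a new `Llama` instance is constructed, which is consistent with memory exhaustion if a previous model is still resident when the next load begins. A possible mitigation, sketched here as an assumption rather than the extension's actual code (`load_fn` stands in for the real `llama_cpp.Llama` constructor), is to cache one model and explicitly release it before loading a different one:

```python
import gc

# Module-level cache: at most one model resident at a time.
_cached = {"key": None, "model": None}

def get_model(model_path, load_fn):
    """Return a cached model, reusing it when the path is unchanged.

    Before loading a different model, drop the reference to the old one
    and run a GC pass so its context memory can actually be freed first.
    `load_fn` is a hypothetical loader callable (e.g. the llama_cpp.Llama
    constructor in the real node); this caching scheme is an assumption.
    """
    if _cached["key"] == model_path and _cached["model"] is not None:
        return _cached["model"]
    # Release the previous model before allocating the new context.
    _cached["model"] = None
    _cached["key"] = None
    gc.collect()
    _cached["model"] = load_fn(model_path)
    _cached["key"] = model_path
    return _cached["model"]
```

If the root cause is instead a bad or truncated `.gguf` download, no caching helps; re-downloading `llava-v1.6-vicuna-13b.Q4_K_M.gguf` and verifying its size would be the first thing to check.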