gokayfem / ComfyUI_VLM_nodes

Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
Apache License 2.0

IMPORT FAILED in ComfyUI #76

Open aifuzz59 opened 2 months ago

aifuzz59 commented 2 months ago

The import failed for these nodes:

File "", line 940, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 46, in <module>
    check_requirements_installed(llama_cpp_agent_path)
File "D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 35, in check_requirements_installed
    subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', *missing_packages])
File "subprocess.py", line 413, in check_call
subprocess.CalledProcessError: Command '['D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\python_embeded\python.exe', '-s', '-m', 'pip', 'install', 'llama-cpp-agent', 'mkdocs', 'mkdocs-material', 'mkdocstrings[python]', 'docstring-parser']' returned non-zero exit status 2.

Cannot import D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes module for custom nodes: Command '['D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\python_embeded\python.exe', '-s', '-m', 'pip', 'install', 'llama-cpp-agent', 'mkdocs', 'mkdocs-material', 'mkdocstrings[python]', 'docstring-parser']' returned non-zero exit status 2.

I have updated ComfyUI and it still won't work. Any ideas?

gokayfem commented 2 months ago

change the llama-cpp-agent version inside `cpp_agent_req.txt` to `llama-cpp-agent==0.0.17`
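For anyone who wants to script the maintainer's suggestion rather than edit the file by hand, here is a minimal sketch that rewrites the `llama-cpp-agent` line in `cpp_agent_req.txt` to the pinned version. The function name and the path you pass it are illustrative; point it at `custom_nodes/ComfyUI_VLM_nodes/cpp_agent_req.txt` in your own install.

```python
from pathlib import Path

def pin_agent_version(req_path, version="0.0.17"):
    """Rewrite any llama-cpp-agent line in a requirements file to a fixed pin.

    Hypothetical helper; req_path should be your cpp_agent_req.txt.
    Other requirement lines are left untouched.
    """
    path = Path(req_path)
    lines = path.read_text().splitlines()
    out = []
    for line in lines:
        if line.strip().startswith("llama-cpp-agent"):
            out.append(f"llama-cpp-agent=={version}")
        else:
            out.append(line)
    path.write_text("\n".join(out) + "\n")
```

After pinning, reinstall with the portable build's embedded interpreter (the same one shown in the traceback), e.g. `python_embeded\python.exe -s -m pip install -r cpp_agent_req.txt`, so the package lands in the portable Python rather than a system one.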

ricperry commented 2 weeks ago

> change the llama-cpp-agent version inside cpp_agent_req.txt to llama-cpp-agent==0.0.17

This doesn't work on Linux + ROCm

Is there a way you can hook into the Ollama API?
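For reference, a minimal sketch of what talking to a local Ollama server looks like, assuming Ollama is running on its default port 11434 with a vision-capable model such as `llava` already pulled (model name and host are assumptions, not anything this repo ships):

```python
import json
import urllib.request

def build_generate_payload(prompt, model="llava"):
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a stream
    of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt, model="llava", host="http://localhost:11434"):
    """POST a prompt to a local Ollama server and return the response text.

    Requires a running Ollama instance; this is a sketch, not a node
    implementation.
    """
    data = json.dumps(build_generate_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A node wrapping this would mainly need to encode the input image as base64 and pass it in the payload's `images` field, which the Ollama API accepts for multimodal models.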