gokayfem / ComfyUI_VLM_nodes

Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
Apache License 2.0

Failed to install - is not a valid wheel filename #33

Closed julien-blanchon closed 6 months ago

julien-blanchon commented 7 months ago

I'm failing to install VLM_nodes, with the following error:

Installing llama-cpp-python...
ERROR: llama_cpp_python-0.2.55-manylinux_2_31_x86_64.whl is not a valid wheel filename.
[notice] A new release of pip is available: 23.3.2 -> 24.0
[notice] To update, run: pip install --upgrade pip
Traceback (most recent call last):
File "/comfyui/nodes.py", line 1899, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/comfyui/custom_nodes/comfyui_vlm_nodes/__init__.py", line 44, in <module>
install_llama(system_info)
File "/comfyui/custom_nodes/comfyui_vlm_nodes/install_init.py", line 93, in install_llama
install_package("llama-cpp-python", custom_command=custom_command)
File "/comfyui/custom_nodes/comfyui_vlm_nodes/install_init.py", line 73, in install_package
subprocess.check_call(command)
File "/usr/local/lib/python3.11/subprocess.py", line 413, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-m', 'pip', 'install', 'llama-cpp-python', '--no-cache-dir', 'https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.55/llama_cpp_python-0.2.55-manylinux_2_31_x86_64.whl']' returned non-zero exit status 1.
Cannot import /comfyui/custom_nodes/comfyui_vlm_nodes module for custom nodes: Command '['/usr/local/bin/python', '-m', 'pip', 'install', 'llama-cpp-python', '--no-cache-dir', 'https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.55/llama_cpp_python-0.2.55-manylinux_2_31_x86_64.whl']' returned non-zero exit status 1.

I have CUDA enabled on the machine, but since I'm building inside Docker it might not be detected, so the installer picks the manylinux_2_31_x86_64 platform, which doesn't exist in the llama_cpp_python releases.
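As an aside, pip's "is not a valid wheel filename" message suggests the constructed filename itself is malformed, not just that the platform is unavailable: per the wheel spec (PEP 427), a wheel name also needs Python and ABI tags (e.g. `cp311-cp311`), which the failing URL omits. A quick stdlib-only sketch illustrating this (the "good" filename below is hypothetical):

```python
import re

# PEP 427 wheel filenames have the dash-separated fields:
# {name}-{version}(-{build})?-{python tag}-{abi tag}-{platform tag}.whl
WHEEL_RE = re.compile(
    r"^(?P<name>[^-]+)-(?P<version>[^-]+)(-(?P<build>\d[^-]*))?"
    r"-(?P<python>[^-]+)-(?P<abi>[^-]+)-(?P<platform>[^-]+)\.whl$"
)

bad = "llama_cpp_python-0.2.55-manylinux_2_31_x86_64.whl"       # from the error
good = "llama_cpp_python-0.2.55-cp311-cp311-manylinux_2_31_x86_64.whl"  # illustrative

print(WHEEL_RE.match(bad) is None)        # True: python/abi tags are missing
print(WHEEL_RE.match(good) is not None)   # True: well-formed name
```

So even with the right platform detection, pip would reject this URL until the filename carries the interpreter and ABI tags.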

gokayfem commented 7 months ago

Can you change the function install_llama() inside install_init.py to this:

def install_llama():
    """Install llama-cpp-python with consideration for macOS or other OS specifics."""
    imported = package_is_installed("llama-cpp-python") or package_is_installed("llama_cpp")
    if not imported:
        install_package("llama-cpp-python")
    else:
        print("llama-cpp-python is already installed.")

We can try a plain pip install directly and see if it works.
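For reference, a self-contained sketch of that suggestion: a plain `pip install` builds from source when no matching wheel exists, so it cannot trip over a malformed prebuilt-wheel URL. `package_is_installed` below is a stand-in for the repo's helper, approximated with importlib:

```python
import importlib.util
import subprocess
import sys

def package_is_installed(name: str) -> bool:
    """Return True if `name` is importable (dashes map to underscores)."""
    return importlib.util.find_spec(name.replace("-", "_")) is not None

def install_llama():
    """Install llama-cpp-python via plain pip, skipping if already present."""
    if package_is_installed("llama_cpp"):
        print("llama-cpp-python is already installed.")
        return
    # No custom wheel URL: let pip resolve (or build) the package itself.
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "llama-cpp-python"]
    )
```

The trade-off is that a source build needs a working C/C++ toolchain in the Docker image, but it avoids guessing the platform tag entirely.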