Closed julien-blanchon closed 6 months ago
Can you change the function `install_llama()` inside `install_init.py`?
```python
def install_llama():
    """Install llama-cpp-python with consideration for macOS or other OS specifics."""
    imported = package_is_installed("llama-cpp-python") or package_is_installed("llama_cpp")
    if not imported:
        install_package("llama-cpp-python")
    else:
        print("llama-cpp-python is already installed.")
```
We could try a plain `pip install` directly and see if it works.
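A minimal sketch of what that change could look like. The helpers `package_is_installed` and `install_package` from the original snippet are assumed; here they are stubbed out with standard-library calls (`importlib.util.find_spec` and `pip` via `subprocess`) so the idea is self-contained, and a failed install prints pip's error instead of crashing:

```python
import importlib.util
import subprocess
import sys

def package_is_installed(name: str) -> bool:
    # Hypothetical stand-in for the repo's helper: treat a package as
    # installed if its import name can be found on the current interpreter.
    return importlib.util.find_spec(name.replace("-", "_")) is not None

def install_llama():
    """Try a plain `pip install llama-cpp-python`; surface errors instead of crashing."""
    if package_is_installed("llama_cpp"):
        print("llama-cpp-python is already installed.")
        return
    result = subprocess.run(
        [sys.executable, "-m", "pip", "install", "llama-cpp-python"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # e.g. "no matching distribution" when no wheel exists
        # for the detected platform tag
        print(result.stderr)
```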
I'm failing to install VLM_node with the following error. I have CUDA enabled on the machine, but since I'm building inside Docker it might not be detected, and pip resolves to the `manylinux_2_31_x86_64` platform tag, for which no wheel exists on the llama_cpp_python repo.
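One possible workaround, assuming the build tools and CUDA toolkit are present in the Docker image: force a from-source build so the wheel platform tag is irrelevant. Per llama-cpp-python's README, the CMake flag enabling CUDA is `GGML_CUDA=on` in recent versions (older releases used `LLAMA_CUBLAS=on`); check which one matches the pinned version.

```shell
# Build llama-cpp-python from source with CUDA enabled,
# bypassing the (missing) prebuilt wheel for this platform tag.
CMAKE_ARGS="-DGGML_CUDA=on" pip install --no-cache-dir llama-cpp-python
```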