For ROCm GPUs, please use the mac branch of the repo. Also, yes, you can manually install it yourself:
https://github.com/abetlen/llama-cpp-python/
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
May I ask why there is a need for two branches for this?
Detection between CUDA and ROCm can be implemented for ComfyUI by checking which PyTorch build is installed (see the sketch below).
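For example, a minimal sketch (not the node's actual code) could look like this, since ROCm PyTorch builds set `torch.version.hip` while CUDA builds set `torch.version.cuda`:

```python
# Minimal sketch: tell a ROCm PyTorch build from a CUDA build.
import torch


def detect_gpu_backend() -> str:
    """Return 'rocm', 'cuda', or 'cpu' based on the installed PyTorch build."""
    if getattr(torch.version, "hip", None):  # set only on ROCm wheels
        return "rocm"
    if torch.version.cuda is not None:  # set only on CUDA wheels
        return "cuda"
    return "cpu"
```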
Yes, it can be done; this is a quick fix for your situation. The mac branch exists because the auto-gptq library, which is a requirement for the InternLM model, does not support Mac devices. That branch also installs llama-cpp-python directly with a plain `pip install llama-cpp-python`, which is why I suggested it; it does not check whether CUDA is present or not.
Thank you for this node, I hope I will have a fun time with what can be achieved with it :)
Hello.
https://github.com/gokayfem/ComfyUI_VLM_nodes/blob/d1770aa93187b1d0aa1b9b956fa168209dfe16b4/install_init.py#L44-L48
Here, if `gpu` is `True`, there is a replacement, but with AMD `cuda_version` will be `None`. So at least checking it for `None` before accessing it with `replace` would be a good start (see the sketch at the end of this comment). Of course, after that it will still fail to download with this command:
subprocess.CalledProcessError: Command '['/home/shurik/devAI/ComfyUI/.venv/bin/python', '-m', 'pip', 'install', 'llama-cpp-python', '--no-cache-dir', '--force-reinstall', '--no-deps', '--index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/None']' returned non-zero exit status 1.
But I think I can install `llama` myself for ROCm 6.
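For illustration, here is a minimal sketch of the `None` guard suggested above. The function name, variable names, and exact URL tag format are my assumptions for this sketch, not the repo's actual install_init.py code:

```python
# Hedged sketch of the suggested guard, not the actual install_init.py code.
# Assumption: the cuBLAS wheel index URL is derived from torch.version.cuda,
# which is None on ROCm and CPU-only PyTorch builds.
from typing import Optional

import torch


def build_llama_cpp_index_url(gpu: bool) -> Optional[str]:
    cuda_version = torch.version.cuda  # e.g. "12.1" on CUDA builds, None on ROCm/CPU
    if gpu and cuda_version is not None:
        # Exact tag format ("cu121" here) is an assumption for illustration.
        tag = "cu" + cuda_version.replace(".", "")
        return f"https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/{tag}"
    # No CUDA detected (ROCm or CPU): skip the cuBLAS wheel index and let the
    # caller fall back to a plain `pip install llama-cpp-python`.
    return None
```

With a guard like this, the ROCm case would skip the broken `.../AVX2/None` index URL instead of failing in the middle of the install.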