gokayfem / ComfyUI_VLM_nodes

Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
Apache License 2.0

Fails on 7900XTX (ROCm v6) #56

Closed bigcat88 closed 3 months ago

bigcat88 commented 3 months ago

Hello.

https://github.com/gokayfem/ComfyUI_VLM_nodes/blob/d1770aa93187b1d0aa1b9b956fa168209dfe16b4/install_init.py#L44-L48

Here, if gpu is True, a string replacement is performed on cuda_version, but with an AMD GPU cuda_version will be None.

So, at the very least, checking it for None before calling replace on it would be a good start.
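
For illustration, something along these lines would avoid the crash. This is a hypothetical sketch only; gpu and cuda_version mirror the names in the linked install_init.py, but the helper itself and the exact handling are mine, not the repo's code:

```python
def cuda_wheel_suffix(gpu: bool, cuda_version: str | None) -> str | None:
    """Return e.g. '121' for CUDA 12.1, or None when no CUDA toolkit was
    detected (ROCm/AMD setups report no CUDA version)."""
    if gpu and cuda_version is not None:
        return cuda_version.replace(".", "")
    return None  # caller can then skip the cuBLAS wheel index entirely
```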

Of course, even after that the download will still fail with this command:

subprocess.CalledProcessError: Command '['/home/shurik/devAI/ComfyUI/.venv/bin/python', '-m', 'pip', 'install', 'llama-cpp-python', '--no-cache-dir', '--force-reinstall', '--no-deps', '--index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/None']' returned non-zero exit status 1.

but I think I can install llama-cpp-python myself for ROCm 6.

gokayfem commented 3 months ago

For ROCm GPUs, please use the mac branch of the repo. Also, yes, you can install it manually yourself:

https://github.com/abetlen/llama-cpp-python/

CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python


bigcat88 commented 3 months ago

May I ask why there is a need to have two branches for this?

Detecting CUDA vs. ROCm could be done in ComfyUI by checking which PyTorch build is installed, as sketched below.
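
For example (an illustrative sketch only, not how the node currently works): ROCm builds of PyTorch expose torch.version.hip, while CUDA builds expose torch.version.cuda:

```python
import torch

def detect_backend() -> str:
    # ROCm builds of PyTorch set torch.version.hip to a version string;
    # on CUDA builds it is None.
    if getattr(torch.version, "hip", None):
        return "rocm"
    # CUDA builds set torch.version.cuda; also confirm a device is visible.
    if torch.version.cuda and torch.cuda.is_available():
        return "cuda"
    return "cpu"
```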

gokayfem commented 3 months ago

Yes, it can be done; this was a quick fix for your situation. The mac branch exists because the auto-gptq library, which is a requirement for the InternLM model, doesn't support Mac devices. That branch also installs llama-cpp-python directly with pip install llama-cpp-python, which is why I suggested it; it does not check whether CUDA is present.

bigcat88 commented 3 months ago

Thank you for this node; I hope I will have a fun time with what can be achieved with it :)