gokayfem / ComfyUI_VLM_nodes

Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation

GPU not utilized with standard llama-cpp-python #71

Closed: drphero closed this issue 4 months ago

drphero commented 5 months ago

When loading the llava models, you can see BLAS = 0 in the information printed to the console. This is because llama-cpp-python requires a special install if you want GPU capabilities. I'm not sure whether this affects Linux users, but it does affect Windows users.
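If you want to confirm this from Python before reinstalling, recent llama-cpp-python versions expose a low-level binding for exactly this check. A minimal sketch, assuming a version that ships `llama_supports_gpu_offload`:

```python
# Minimal check, assuming a llama-cpp-python version that exposes this binding.
import llama_cpp

# Returns False for the default CPU-only wheel, True for a GPU-enabled build.
print(llama_cpp.llama_supports_gpu_offload())
```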

To install llama-cpp-python properly for NVIDIA GPUs:

```
pip install --no-cache-dir llama-cpp-python -C cmake.args="-DLLAMA_CUDA=ON" -vv
```

This of course requires Visual Studio to be installed with the "Desktop development with C++" workload selected, or the standalone VS C++ Build Tools.

I found --no-cache-dir necessary to force an actual rebuild, so I'm not sure how this could be done automatically via requirements.txt.

Now when using the llava models, you should see BLAS = 1 in the console.
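To double-check that layers actually end up on the GPU, you can also load a model with verbose logging enabled. A small sketch; the model path below is a hypothetical placeholder, so point it at whichever llava GGUF file you use:

```python
from llama_cpp import Llama

# Hypothetical path; substitute your own llava GGUF file.
llm = Llama(
    model_path="models/llava-v1.5-7b.Q4_K_M.gguf",
    n_gpu_layers=-1,  # -1 asks llama.cpp to offload all layers to the GPU
    verbose=True,     # the startup log should now report BLAS = 1
)
```

With a CUDA-enabled build, the verbose startup log should also report how many layers were offloaded to the GPU.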

gokayfem commented 5 months ago

We install it with pre-built wheels, which automatically support CUDA on Linux and Windows. Pre-built wheel support for macOS was added recently; I haven't added it to the repo yet.