abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

Why is this not working for the current release? UNABLE TO USE GPU #1781

Open · AnirudhJM24 opened this issue 1 month ago

AnirudhJM24 commented 1 month ago
I got it to work just by following the instructions; I'm using CUDA 12.3:

set CMAKE_ARGS="-DLLAMA_CUBLAS=on" && set FORCE_CMAKE=1 && pip install --no-cache-dir llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu123

Originally posted by @BinhPQ2 in https://github.com/abetlen/llama-cpp-python/issues/576#issuecomment-2379861701
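
A minimal sketch (the model path below is a placeholder, not from this thread) to check whether the installed wheel actually offloads to the GPU: with verbose=True, the llama.cpp load log should report layers offloaded to CUDA, and if it only mentions CPU buffers the installed build has no GPU support.

```python
# Minimal sketch: confirm the CUDA wheel actually offloads layers to the GPU.
# "model.gguf" is a placeholder; point it at any local GGUF model file.
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",  # placeholder path
    n_gpu_layers=-1,          # offload all layers to the GPU
    verbose=True,             # print llama.cpp load logs, including offload info
)

# If the log above reports layers offloaded to CUDA, the GPU build is working.
print(llm("Hello", max_tokens=8)["choices"][0]["text"])
```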

BinhPQ2 commented 1 month ago

What is your CUDA version?

Also, you could try updating your Visual Studio Build Tools to see if that solves the problem: https://visualstudio.microsoft.com/downloads/?q=build+tools
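
A quick way to answer the CUDA version question, as a hedged sketch (assumes nvidia-smi and/or nvcc are on PATH): nvidia-smi shows the highest CUDA version the driver supports, nvcc --version shows the installed toolkit, and the result should line up with the cuXXX suffix of the wheel index (cu123 above).

```python
# Sketch: print the local CUDA driver/toolkit versions to match against the
# cuXXX wheel index. Assumes nvidia-smi and/or nvcc are on PATH.
import shutil
import subprocess

for tool, args in (("nvidia-smi", []), ("nvcc", ["--version"])):
    path = shutil.which(tool)
    if path is None:
        print(f"{tool}: not found on PATH")
        continue
    result = subprocess.run([path, *args], capture_output=True, text=True)
    # nvidia-smi's header shows the driver's max supported CUDA version;
    # nvcc reports the installed CUDA toolkit version.
    print(result.stdout or result.stderr)
```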