zylon-ai / private-gpt

Interact with your documents using the power of GPT, 100% privately, no data leaks
https://docs.privategpt.dev
Apache License 2.0

Installing LLAMA CUDA libraries and Python bindings ERROR #1900

Open kaisadhar opened 1 month ago

kaisadhar commented 1 month ago

While executing the following command, I always get this error:

CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python

I haven't found a solution to this problem. (Screenshots of the build error were attached.) PS: I am running this on WSL 2.

AlexPerkin commented 3 weeks ago

Maybe this will help: https://github.com/zylon-ai/private-gpt/issues/1584

neofob commented 5 days ago

@kaisadhar Recent versions of llama.cpp use the flag -DLLAMA_CUDA=on instead of -DLLAMA_CUBLAS=on, so try passing that to your poetry run pip install command instead.
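For reference, keeping the same pip options from the original post, the full command would then look something like this (whether any additional CMake flags are needed still depends on your CUDA toolkit setup):

CMAKE_ARGS='-DLLAMA_CUDA=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python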

Also, building it requires the python3-dev package. So with WSL, I guess you can do this:

wsl
sudo apt-get install -y python3-dev
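It may also be worth confirming that the GPU is actually visible from inside WSL before rebuilding. A quick sanity check (this assumes the NVIDIA Windows driver with WSL CUDA support is installed, which exposes nvidia-smi inside the WSL distro):

wsl
nvidia-smi

If the flag is picked up correctly, the llama-cpp-python build output should mention CUDA during the CMake configure step.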