kaisadhar opened 1 month ago
Maybe this will help
https://github.com/zylon-ai/private-gpt/issues/1584
@kaisadhar
Recent versions of llama.cpp use the flag `-DLLAMA_CUDA=on` instead of `-DLLAMA_CUBLAS=on`. So try passing that to your `poetry install ...` command.
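For example, applying that change to the reinstall command from this issue would look like the following (same command as reported below, with only the flag swapped):

```
CMAKE_ARGS='-DLLAMA_CUDA=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```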
Also, it requires the `python3-dev` package. So with WSL, I guess you can do this:
```
wsl
sudo apt-get install -y python3-dev
```
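Before rebuilding, it may also be worth confirming that the GPU and the CUDA toolkit are actually visible inside WSL 2 (a quick sanity check, assuming the NVIDIA driver and CUDA toolkit are already installed):

```
nvidia-smi       # should list the GPU exposed to WSL 2
nvcc --version   # should print the CUDA compiler version used for the build
```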
While executing the command

```
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```

I always get this error. I haven't found a solution for this problem. PS: I am running this on WSL 2.
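If the reinstall with the newer flag succeeds, one way to check whether the wheel was actually built with CUDA support is the low-level `llama_supports_gpu_offload` binding (an assumption: this helper is only exposed in reasonably recent llama-cpp-python releases):

```
# Prints True if llama-cpp-python was compiled with GPU offload support
poetry run python -c "from llama_cpp import llama_supports_gpu_offload; print(llama_supports_gpu_offload())"
```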