Open congson1293 opened 4 months ago
I had the same problem because CUBLAS support is deprecated: you can still build 0.2.79 with CUBLAS, but for newer versions (>= 0.2.80) you should use
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --upgrade --force-reinstall
I think you have the same problem as this issue: https://github.com/abetlen/llama-cpp-python/issues/1573#issuecomment-2214088672
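A quick way to check that the reinstall actually produced a CUDA-enabled build is to query the bindings directly. This is a minimal sketch, assuming llama-cpp-python >= 0.2.80, a local GGUF model at ./model.gguf (hypothetical path), and that llama_supports_gpu_offload() is re-exported by the llama_cpp package:

# Minimal sketch: confirm the reinstalled wheel was actually built with CUDA.
# Assumes a local GGUF model at ./model.gguf (hypothetical path).
import llama_cpp
from llama_cpp import Llama

# False here means the wheel is still a CPU-only build and needs reinstalling
# with CMAKE_ARGS="-DGGML_CUDA=on".
print("GPU offload compiled in:", llama_cpp.llama_supports_gpu_offload())

# n_gpu_layers=-1 asks to offload every layer; with verbose=True the load log
# should mention the CUDA device (e.g. the V100 or RTX 3090) if offload works.
llm = Llama(model_path="./model.gguf", n_gpu_layers=-1, verbose=True)
print(llm("Q: 2+2= A:", max_tokens=8)["choices"][0]["text"])

If the first print shows False, the installed wheel is still CPU-only and the reinstall above did not take effect; a cached wheel is a common cause, so adding --no-cache-dir to the pip command forces a fresh build.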
I tried installing it on a server with an RTX 3090 card, but it still raised the same issue.
I installed llama-cpp-python on a system with an AMD EPYC 7542 CPU and a V100 GPU, but it raised the exception shown in the image below: