Open sardetushar opened 2 months ago
I was getting the same problem. I put OpenBLAS on C:\, added it to PATH, installed the NVIDIA CUDA 12.6 drivers and CMake 3.30.3, and used the following build flags in PowerShell (not cmd):
$env:FORCE_CMAKE='1'; $env:CMAKE_ARGS='-DGGML_CUDA=on -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_FMA=off -DCMAKE_GENERATOR_TOOLSET=cuda="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6"'
I am not able to install llama-cpp-python following
https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#installation-configuration
set CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS"
I get the following error:
pip install llama-cpp-python --verbose
-- Could NOT find BLAS (missing: BLAS_LIBRARIES)
CMake Warning at vendor/llama.cpp/ggml/src/CMakeLists.txt:234 (message):
  BLAS not found, please refer to
  https://cmake.org/cmake/help/latest/module/FindBLAS.html#blas-lapack-vendors
  to set correct GGML_BLAS_VENDOR
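One possible cause of this on Windows (an assumption about this setup, not something visible in the log): with cmd's `set VAR="value"`, the quotes become part of the variable's value, so CMake receives a mangled flag string. For illustration in POSIX sh syntax, where the shell strips the quotes before CMake ever sees them:

```shell
# The quotes delimit the value and are removed by the shell,
# so CMake receives the two -D flags cleanly.
export CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS"
echo "$CMAKE_ARGS"
```

In PowerShell the equivalent is `$env:CMAKE_ARGS = '-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS'`, which likewise keeps the quotes out of the stored value.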
I have tried many different options. I downloaded OpenBLAS-0.3.27-x64.zip from https://github.com/OpenMathLib/OpenBLAS/releases,
then manually unpacked llama_cpp_python-0.2.82.tar.gz and edited vendor/llama.cpp/ggml/src/CMakeLists.txt,
where I added the following in CMakeLists.txt.
But, no luck.
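Before editing CMakeLists.txt, it may help to confirm that an OpenBLAS library file actually exists somewhere CMake can see it. A small diagnostic sketch (the directory list and candidate file names are assumptions; adjust them to wherever the zip was unpacked):

```python
import os

# File names a FindBLAS-style probe might look for (assumed set, not exhaustive).
CANDIDATE_NAMES = (
    "libopenblas.lib", "openblas.lib",   # MSVC import libraries
    "libopenblas.dll", "libopenblas.so", # runtime libraries
)

def find_openblas(search_dirs):
    """Return paths of OpenBLAS library files found in the given directories."""
    hits = []
    for d in search_dirs:
        for name in CANDIDATE_NAMES:
            path = os.path.join(d, name)
            if os.path.isfile(path):
                hits.append(path)
    return hits

if __name__ == "__main__":
    # Hypothetical unpack location plus everything on PATH.
    dirs = ["C:/OpenBLAS/lib"] + os.environ.get("PATH", "").split(os.pathsep)
    print(find_openblas(dirs) or "no OpenBLAS library found")
```

If this prints nothing useful, CMake's FindBLAS has no chance either, regardless of what the CMakeLists.txt says.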
I also tried installing OpenBLAS in a conda environment with pip install openblas, but got the same error.
System: Windows 10 Enterprise, Intel i5-1245U, 32 GB RAM.
I also referred to https://github.com/ggerganov/llama.cpp/issues/627, but hit the same error.
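One workaround that comes up in threads like that one is to bypass detection and point FindBLAS directly at the library file via the BLAS_LIBRARIES cache variable, which CMake's FindBLAS accepts as a user override. The path below is an assumption for an OpenBLAS zip unpacked to C:/OpenBLAS (sh syntax shown; use `$env:CMAKE_ARGS = '...'` in PowerShell):

```shell
# Point FindBLAS at the import library explicitly instead of letting it probe.
# C:/OpenBLAS/lib/libopenblas.lib is a hypothetical path - adjust to your layout.
export CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS \
-DBLAS_LIBRARIES=C:/OpenBLAS/lib/libopenblas.lib"
echo "$CMAKE_ARGS"
```

Then rerun pip install llama-cpp-python --no-cache-dir --verbose so the wheel is rebuilt rather than reused from cache.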
I need to use llama-cpp-python with OpenBLAS enabled.