abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

Windows - OpenBLAS (CPU) - Could NOT find BLAS (missing: BLAS_LIBRARIES) #1595

Open sardetushar opened 2 months ago

sardetushar commented 2 months ago

I am not able to install llama-cpp-python using

https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#installation-configuration

set CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS"
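(One possible culprit, which is an assumption on my part and not confirmed by the log: in cmd.exe, `set VAR="value"` stores the surrounding quotes as part of the value, so CMake may receive one mangled argument instead of the two `-D` flags. A quote-free sketch for cmd.exe:)

```shell
:: cmd.exe - no quotes, so CMAKE_ARGS holds exactly the two -D flags
set CMAKE_ARGS=-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
set FORCE_CMAKE=1
pip install llama-cpp-python --no-cache-dir --verbose
```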

I get the following error when running

pip install llama-cpp-python --verbose

  ...
  -- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.33.1.windows.1")
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
  -- Looking for pthread_create in pthreads
  -- Looking for pthread_create in pthreads - not found
  -- Looking for pthread_create in pthread
  -- Looking for pthread_create in pthread - not found
  -- Found Threads: TRUE
  -- Found OpenMP_C: -openmp (found version "2.0")
  -- Found OpenMP_CXX: -openmp (found version "2.0")
  -- Found OpenMP: TRUE (found version "2.0")
  -- OpenMP found
  -- Could NOT find BLAS (missing: BLAS_LIBRARIES)
  CMake Warning at vendor/llama.cpp/ggml/src/CMakeLists.txt:234 (message):
    BLAS not found, please refer to
    https://cmake.org/cmake/help/latest/module/FindBLAS.html#blas-lapack-vendors
    to set correct GGML_BLAS_VENDOR


I have tried many different options, e.g. downloading OpenBLAS-0.3.27-x64.zip from https://github.com/OpenMathLib/OpenBLAS/releases

and manually editing vendor/llama.cpp/ggml/src/CMakeLists.txt inside llama_cpp_python-0.2.82.tar.gz,

where I added the following to CMakeLists.txt:

    include_directories("D:/downloads/OpenBLAS/include")
    add_compile_definitions(BLAS_LIBRARIES)
    add_link_options("D:/downloads/OpenBLAS/lib/libopenblas.dll.a")

But no luck.

I also tried installing OpenBLAS via conda and pip install openblas, but I get the same error.

I am on Windows 10 Enterprise, Intel i5-1245U, 32 GB RAM.

I also referred to this - https://github.com/ggerganov/llama.cpp/issues/627 - same error.

I need to use llama-cpp-python with OpenBLAS enabled.
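For reference, an alternative to patching the vendored CMakeLists.txt might be to pre-seed FindBLAS from the command line. This is an untested sketch, assuming the standard FindBLAS cache variables are honored by the build; the D:/downloads paths are the ones from above.

```shell
:: Sketch (untested): pre-seed FindBLAS so "Could NOT find BLAS" is skipped.
:: BLAS_LIBRARIES is the standard FindBLAS result variable; setting it on the
:: command line short-circuits detection. Paths match the attempt above.
set CMAKE_ARGS=-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS -DBLAS_LIBRARIES=D:/downloads/OpenBLAS/lib/libopenblas.lib -DBLAS_INCLUDE_DIRS=D:/downloads/OpenBLAS/include
set FORCE_CMAKE=1
pip install llama-cpp-python --no-cache-dir --verbose
```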

ArmstrongSubero commented 2 weeks ago

I was getting the same problem. I put OpenBLAS on C:, added it to PATH, installed the NVIDIA CUDA Toolkit 12.6 and CMake 3.30.3, and used the following build flags in PowerShell (not cmd):

$env:FORCE_CMAKE='1'; $env:CMAKE_ARGS='-DGGML_CUDA=on -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_FMA=off -DCMAKE_GENERATOR_TOOLSET=cuda="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6"'
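For the original OpenBLAS (CPU) goal, the same PowerShell pattern might look like this. A sketch only: it assumes OpenBLAS is extracted to C:/OpenBLAS (a hypothetical path), and that CMake picks the install up via the CMAKE_PREFIX_PATH environment variable, which CMake reads as a search prefix.

```shell
# Hedged sketch: PowerShell, CPU/OpenBLAS variant of the flags above.
# GGML_BLAS / GGML_BLAS_VENDOR are the flags from the llama-cpp-python README;
# C:/OpenBLAS is where the release zip was (hypothetically) extracted.
$env:FORCE_CMAKE = '1'
$env:CMAKE_ARGS = '-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS'
$env:CMAKE_PREFIX_PATH = 'C:/OpenBLAS'
pip install llama-cpp-python --force-reinstall --no-cache-dir --verbose
```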