snexus / llm-search

Querying local documents, powered by LLM
MIT License
421 stars 51 forks

CMake failed #89

Closed: bradphelan closed this issue 4 months ago

bradphelan commented 4 months ago

I tried to follow the build/install instructions. It failed while building vendor/llama.cpp:

Building wheels for collected packages: llmsearch, llama-cpp-python
  Building wheel for llmsearch (pyproject.toml) ... done
  Created wheel for llmsearch: filename=llmsearch-0.6.1.dev0+g5243360.d20240214-py3-none-any.whl size=51977 sha256=88d36d6f58cf861648ea59aca6b622c60902d85e9ed073303d41fe789458c350
  Stored in directory: /home/brad/.cache/pip/wheels/19/34/30/d9d88eb34ce7925c34d871c614b71065630f4ba7aff90abcf6
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [44 lines of output]
      *** scikit-build-core 0.8.0 using CMake 3.28.3 (wheel)
      *** Configuring CMake...
      loading initial cache file /tmp/tmpof6qnuza/build/CMakeInit.txt
      -- The C compiler identification is GNU 11.3.0
      -- The CXX compiler identification is GNU 11.3.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.41.0")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
      -- Found Threads: TRUE
      -- Could not find nvcc, please set CUDAToolkit_ROOT.
      CMake Warning at vendor/llama.cpp/CMakeLists.txt:381 (message):
        cuBLAS not found

      -- CUDA host compiler is GNU
      CMake Error at vendor/llama.cpp/CMakeLists.txt:784 (get_flags):
        get_flags Function invoked with incorrect arguments for function named:
        get_flags

      -- ccache found, compilation results will be cached. Disable with LLAMA_CCACHE=OFF.
      -- CMAKE_SYSTEM_PROCESSOR: x86_64
      -- x86 detected
      CMake Warning (dev) at CMakeLists.txt:21 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      CMake Warning (dev) at CMakeLists.txt:30 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      -- Configuring incomplete, errors occurred!

      *** CMake configuration failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Successfully built llmsearch
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

Any ideas?

bradphelan commented 4 months ago

I hadn't installed NVIDIA CUDA support. After installing it, the install now finishes.
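For anyone hitting the same `get_flags` error: the log above shows CMake warning `Could not find nvcc, please set CUDAToolkit_ROOT` before the configure step dies, so a quick check that `nvcc` is visible can save a failed build. This is just a sketch; the `/usr/local/cuda` path is a typical default and may differ on your system:

```shell
#!/bin/sh
# Check whether nvcc is on PATH; llama.cpp's CMake CUDA detection needs it
# (or CUDAToolkit_ROOT pointing at the toolkit install).
if command -v nvcc >/dev/null 2>&1; then
    echo "nvcc found: $(command -v nvcc)"
else
    echo "nvcc missing - install the NVIDIA CUDA toolkit, or try:"
    echo "  export CUDAToolkit_ROOT=/usr/local/cuda   # typical default, adjust to your install"
fi
```

Once `nvcc` resolves, forcing a clean rebuild of the wheel (so pip doesn't reuse the failed cached build) should let the configure step complete:

```shell
pip install --force-reinstall --no-cache-dir llama-cpp-python
```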