abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License
8.06k stars 960 forks

Installation error I couldn't solve it in any way #1270

Open start-life opened 7 months ago

start-life commented 7 months ago

  llava-cli.dir\linkLibs.rsp
  C:\w64devkit\bin/ld.exe: C:/w64devkit/bin/../lib/gcc/x86_64-w64-mingw32/13.2.0/../../../../x86_64-w64-mingw32/lib/../lib/libpthread.a(libwinpthread_la-thread.o):thread.c:(.text+0x103f): multiple definition of `pthread_self'; ../../libllama.dll.a(libllama_dll_d000877.o):(.text+0x0): first defined here
  collect2.exe: error: ld returned 1 exit status
  make[2]: *** [vendor\llama.cpp\examples\llava\CMakeFiles\llava-cli.dir\build.make:105: vendor/llama.cpp/examples/llava/llava-cli.exe] Error 1
  make[2]: Leaving directory 'C:/Users/z5050/AppData/Local/Temp/tmpofjgm10t/build'
  make[1]: *** [CMakeFiles\Makefile2:388: vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/all] Error 2
  make[1]: Leaving directory 'C:/Users/z5050/AppData/Local/Temp/tmpofjgm10t/build'
  make: *** [Makefile:135: all] Error 2

  *** CMake build failed
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
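Since the duplicate `pthread_self` definition above is hit only while linking the `llava-cli.exe` example, one possible workaround is to skip the llava targets entirely. This is a sketch under the assumption that llama-cpp-python's CMakeLists exposes an `LLAVA_BUILD` option (it did around the time of this issue); run it from a Windows cmd prompt:

```shell
REM Assumption: LLAVA_BUILD is an option in llama-cpp-python's CMakeLists.
REM Disabling it avoids linking llava-cli.exe, where the MinGW
REM pthread_self multiple-definition error occurs.
set CMAKE_ARGS=-DLLAVA_BUILD=OFF
pip install --no-cache-dir --force-reinstall llama-cpp-python
```

Alternatively, installing a prebuilt wheel (where one is available for your platform) sidesteps local compilation altogether.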

daniel-lewis-ab commented 7 months ago

I'm getting this:

pip install llama-cpp-python
Defaulting to user installation because normal site-packages is not writeable
Collecting llama-cpp-python
  Using cached llama_cpp_python-0.2.56.tar.gz (36.9 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: diskcache>=5.6.1 in /home/dl/.local/lib/python3.10/site-packages (from llama-cpp-python) (5.6.3)
Requirement already satisfied: typing-extensions>=4.5.0 in /home/dl/.local/lib/python3.10/site-packages (from llama-cpp-python) (4.9.0)
Requirement already satisfied: jinja2>=2.11.3 in /home/dl/.local/lib/python3.10/site-packages (from llama-cpp-python) (3.1.2)
Requirement already satisfied: numpy>=1.20.0 in /home/dl/.local/lib/python3.10/site-packages (from llama-cpp-python) (1.26.4)
Requirement already satisfied: MarkupSafe>=2.0 in /home/dl/.local/lib/python3.10/site-packages (from jinja2>=2.11.3->llama-cpp-python) (2.1.3)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [100 lines of output]
  scikit-build-core 0.8.2 using CMake 3.28.3 (wheel)
  Configuring CMake...
  loading initial cache file /tmp/tmp5eh0t1yw/build/CMakeInit.txt
  -- The C compiler identification is GNU 11.4.0
  -- The CXX compiler identification is GNU 11.4.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Found Git: /usr/bin/git (found version "2.34.1")
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
  -- Found Threads: TRUE
  -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
  -- CMAKE_SYSTEM_PROCESSOR: x86_64
  -- x86 detected
  CMake Warning (dev) at CMakeLists.txt:21 (install):
    Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  This warning is for project developers.  Use -Wno-dev to suppress it.

  CMake Warning (dev) at CMakeLists.txt:30 (install):
    Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  This warning is for project developers.  Use -Wno-dev to suppress it.

  -- Configuring done (0.3s)
  -- Generating done (0.0s)
  -- Build files have been written to: /tmp/tmp5eh0t1yw/build
  *** Building project with Ninja...
  Change Dir: '/tmp/tmp5eh0t1yw/build'

  Run Build Command(s): /tmp/pip-build-env-sfc2cu1e/normal/local/lib/python3.10/dist-packages/ninja/data/bin/ninja -v
  [1/22] cd /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp && /tmp/pip-build-env-sfc2cu1e/normal/local/lib/python3.10/dist-packages/cmake/data/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=11.4.0 -DCMAKE_C_COMPILER_ID=GNU -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/usr/bin/cc -P /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/../scripts/gen-build-info-cpp.cmake
  -- Found Git: /usr/bin/git (found version "2.34.1")
  [2/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600  -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -MD -MT vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/build-info.cpp
  [3/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/console.cpp
  [4/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/ggml-alloc.c
  [5/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/ggml-backend.c
  [6/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/llava.cpp
  [7/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/grammar-parser.cpp
  [8/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/sampling.cpp
  [9/22] /usr/bin/c++  -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/../../common -O3 -DNDEBUG -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/llava-cli.cpp
  [10/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/train.cpp
  [11/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/ggml-quants.c
  [12/22] /usr/bin/c++ -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/common/common.cpp
  [13/22] /usr/bin/cc -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/ggml.c
  [14/22] : && /tmp/pip-build-env-sfc2cu1e/normal/local/lib/python3.10/dist-packages/cmake/data/bin/cmake -E rm -f vendor/llama.cpp/libggml_static.a && /usr/bin/ar qc vendor/llama.cpp/libggml_static.a  vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o && /usr/bin/ranlib vendor/llama.cpp/libggml_static.a && :
  [15/22] : && /usr/bin/cc -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libggml_shared.so -o vendor/llama.cpp/libggml_shared.so vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o   && :
  [16/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/examples/llava/clip.cpp
  [17/22] : && /tmp/pip-build-env-sfc2cu1e/normal/local/lib/python3.10/dist-packages/cmake/data/bin/cmake -E rm -f vendor/llama.cpp/examples/llava/libllava_static.a && /usr/bin/ar qc vendor/llama.cpp/examples/llava/libllava_static.a  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o && /usr/bin/ranlib vendor/llama.cpp/examples/llava/libllava_static.a && :
  [18/22] /usr/bin/c++ -DLLAMA_BUILD -DLLAMA_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/vendor/llama.cpp/llama.cpp
  [19/22] : && /usr/bin/c++ -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libllama.so -o vendor/llama.cpp/libllama.so vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o   && :
  [20/22] : && /tmp/pip-build-env-sfc2cu1e/normal/local/lib/python3.10/dist-packages/cmake/data/bin/cmake -E rm -f vendor/llama.cpp/common/libcommon.a && /usr/bin/ar qc vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o && /usr/bin/ranlib vendor/llama.cpp/common/libcommon.a && :
  [21/22] : && /usr/bin/c++ -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libllava.so -o vendor/llama.cpp/examples/llava/libllava.so vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o  -Wl,-rpath,/tmp/tmp5eh0t1yw/build/vendor/llama.cpp:  vendor/llama.cpp/libllama.so && :
  [22/22] : && /usr/bin/c++ -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llava-cli  -Wl,-rpath,/tmp/tmp5eh0t1yw/build/vendor/llama.cpp:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/libllama.so && :

  *** Installing project into wheel...
  -- Install configuration: "Release"
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/lib/libggml_shared.so
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/lib/cmake/Llama/LlamaConfig.cmake
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/lib/cmake/Llama/LlamaConfigVersion.cmake
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/include/ggml.h
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/include/ggml-alloc.h
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/include/ggml-backend.h
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/lib/libllama.so
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/include/llama.h
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/bin/convert.py
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/bin/convert-lora-to-ggml.py
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/llama_cpp/libllama.so
  -- Installing: /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/llama_cpp/libllama.so
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/lib/libllava.so
  -- Set non-toolchain portion of runtime path of "/tmp/tmp5eh0t1yw/wheel/platlib/lib/libllava.so" to ""
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/bin/llava-cli
  -- Set non-toolchain portion of runtime path of "/tmp/tmp5eh0t1yw/wheel/platlib/bin/llava-cli" to ""
  -- Installing: /tmp/tmp5eh0t1yw/wheel/platlib/llama_cpp/libllava.so
  -- Set non-toolchain portion of runtime path of "/tmp/tmp5eh0t1yw/wheel/platlib/llama_cpp/libllava.so" to ""
  -- Installing: /tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/llama_cpp/libllava.so
  -- Set non-toolchain portion of runtime path of "/tmp/pip-install-bpwnzu8i/llama-cpp-python_eb906da8f7134660b1ec2829a982437d/llama_cpp/libllava.so" to ""
  *** Making wheel...
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
      main()
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 261, in build_wheel
      return _build_backend().build_wheel(wheel_directory, config_settings,
    File "/tmp/pip-build-env-sfc2cu1e/overlay/local/lib/python3.10/dist-packages/scikit_build_core/build/__init__.py", line 31, in build_wheel
      return _build_wheel_impl(
    File "/tmp/pip-build-env-sfc2cu1e/overlay/local/lib/python3.10/dist-packages/scikit_build_core/build/wheel.py", line 343, in _build_wheel_impl
      mapping = packages_to_file_mapping(
    File "/tmp/pip-build-env-sfc2cu1e/overlay/local/lib/python3.10/dist-packages/scikit_build_core/build/_pathutil.py", line 46, in packages_to_file_mapping
      for filepath in each_unignored_file(
    File "/tmp/pip-build-env-sfc2cu1e/overlay/local/lib/python3.10/dist-packages/scikit_build_core/build/_file_processor.py", line 44, in each_unignored_file
      exclude_spec = pathspec.GitIgnoreSpec.from_lines(exclude_lines)
  AttributeError: module 'pathspec' has no attribute 'GitIgnoreSpec'
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
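The build itself succeeded here; the failure is the `AttributeError: module 'pathspec' has no attribute 'GitIgnoreSpec'` at the wheel-packaging step. `GitIgnoreSpec` was added in pathspec 0.10.1, so this points at scikit-build-core picking up an older pathspec. A minimal check you can run in the affected interpreter (prints `True`, `False`, or `None` if pathspec is absent):

```python
# Check whether the pathspec visible to this interpreter is new enough
# for scikit-build-core (GitIgnoreSpec appeared in pathspec 0.10.1).
try:
    import pathspec
    has_gitignore_spec = hasattr(pathspec, "GitIgnoreSpec")
except ImportError:
    has_gitignore_spec = None  # pathspec not installed in this environment

print(has_gitignore_spec)
```

If it prints `False`, upgrading pathspec in that environment (or recreating the venv so a fresh build environment is resolved) should clear the error.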

daniel-lewis-ab commented 7 months ago

I just rebuilt the venv and moved on.
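For anyone landing here, "rebuilt the venv" amounts to something like the following sketch (the `.venv` path is illustrative; upgrading pip first means the isolated build environment resolves current versions of scikit-build-core and pathspec):

```shell
# Remove the stale environment and create a fresh one.
rm -rf .venv
python3 -m venv .venv
. .venv/bin/activate

# A current pip pulls current build dependencies into the
# isolated build environment, then retry the install.
pip install --upgrade pip
pip install llama-cpp-python
```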

Huasushis commented 5 months ago

I just rebuilt venv and moved on.

I ran into the same situation, but rebuilding the venv did not get me past it.