abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects #789

Open · himanshus110 opened 1 year ago

himanshus110 commented 1 year ago

Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
  scikit-build-core 0.5.1 using CMake 3.27.6 (wheel)
  Configuring CMake...
  2023-10-03 20:07:26,143 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
  loading initial cache file C:\Users\h02si\AppData\Local\Temp\tmp95lm135w\build\CMakeInit.txt
  -- Building for: NMake Makefiles
  CMake Error at CMakeLists.txt:3 (project):
    Running

     'nmake' '-?'

    failed with:

     The system cannot find the file specified

  CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
  CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
  -- Configuring incomplete, errors occurred!

  *** CMake configuration failed
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
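For what it's worth, the failure pattern above (CMake picking the NMake Makefiles generator and then failing to run nmake) usually means no MSVC toolchain is visible to CMake. A minimal sketch of one way out, assuming Visual Studio Build Tools with the C++ workload are installed (shown for a POSIX-style shell such as Git Bash; use set or $env: syntax in cmd/PowerShell):

    # Run from a "Developer Command Prompt" so cl.exe is on PATH, or
    # switch CMake to a generator you actually have (needs: pip install ninja).
    export CMAKE_GENERATOR=Ninja
    export CMAKE_ARGS="-DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=cl"
    pip install llama-cpp-python --no-cache-dir --verbose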

nsbuttar commented 1 year ago

Keen to know this one as well. Having the same issue.

IgorBeHolder commented 1 year ago

macOS, CPU only; the same error after make docker, make server-run, and make build.

71.90 Building wheels for collected packages: llama_cpp_python, paginate
71.90   Building editable for llama_cpp_python (pyproject.toml): started
73.01   Building editable for llama_cpp_python (pyproject.toml): finished with status 'error'
73.04   error: subprocess-exited-with-error
73.04
73.04   × Building editable for llama_cpp_python (pyproject.toml) did not run successfully.
73.04   │ exit code: 1
73.04   ╰─> [48 lines of output]
73.04       *** scikit-build-core 0.5.1 using CMake 3.27.6 (editable)
73.04       *** Configuring CMake...
73.04       loading initial cache file /tmp/tmpfwrnp7m0/build/CMakeInit.txt
73.04       -- The C compiler identification is GNU 10.2.1
73.04       -- The CXX compiler identification is GNU 10.2.1
73.04       -- Detecting C compiler ABI info
73.04       -- Detecting C compiler ABI info - done
73.04       -- Check for working C compiler: /usr/bin/cc - skipped
73.04       -- Detecting C compile features
73.04       -- Detecting C compile features - done
73.04       -- Detecting CXX compiler ABI info
73.04       -- Detecting CXX compiler ABI info - done
73.04       -- Check for working CXX compiler: /usr/bin/c++ - skipped
73.04       -- Detecting CXX compile features
73.04       -- Detecting CXX compile features - done
73.04       -- Could NOT find Git (missing: GIT_EXECUTABLE)
73.04       CMake Warning at vendor/llama.cpp/scripts/build-info.cmake:16 (message):
73.04         Git not found. Build info will not be accurate.
73.04       Call Stack (most recent call first):
73.04         vendor/llama.cpp/CMakeLists.txt:108 (include)
73.04
73.04       -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
73.04       -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
73.04       -- Check if compiler accepts -pthread
73.04       -- Check if compiler accepts -pthread - yes
73.04       -- Found Threads: TRUE
73.04       -- CMAKE_SYSTEM_PROCESSOR: x86_64
73.04       -- x86 detected
73.04       CMake Warning (dev) at CMakeLists.txt:18 (install):
73.04         Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
73.04       This warning is for project developers. Use -Wno-dev to suppress it.
73.04
73.04       CMake Warning (dev) at CMakeLists.txt:27 (install):
73.04         Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
73.04       This warning is for project developers. Use -Wno-dev to suppress it.
73.04
73.04       -- Configuring done (0.9s)
73.04       -- Generating done (0.0s)
73.04       -- Build files have been written to: /tmp/tmpfwrnp7m0/build
73.04       *** Building project with Ninja...
73.04       Change Dir: '/tmp/tmpfwrnp7m0/build'
73.04
73.04       Run Build Command(s): /usr/bin/ninja -v
73.04       ninja: error: '/.git/modules/llama-cpp-python/modules/vendor/llama.cpp/index', needed by '/app/vendor/llama.cpp/build-info.h', missing and no known rule to make it
73.04
73.04       *** CMake build failed
73.04       [end of output]
73.04
73.04   note: This error originates from a subprocess, and is likely not a problem with pip.
73.04 ERROR: Failed building editable for llama_cpp_python
73.04   Building wheel for paginate (setup.py): started
73.33   Building wheel for paginate (setup.py): finished with status 'done'
73.33   Created wheel for paginate: filename=paginate-0.5.6-py3-none-any.whl size=12666 sha256=d4af1da02a1621dcddc67c7f14afbff7d34ad6d20aef5f63e62063783fe399c8
73.33   Stored in directory: /root/.cache/pip/wheels/03/20/4e/4925d1027f4b377bef23999a1a5eaa438339b741a6a2f3ad39
73.33 Successfully built paginate
73.33 Failed to build llama_cpp_python
73.33 ERROR: Could not build wheels for llama_cpp_python, which is required to install pyproject.toml-based projects
73.89 make: *** [Makefile:10: deps] Error 1
------
Dockerfile:24
--------------------
  22 |     RUN python3 -m pip install --upgrade pip
  23 |
  24 | >>> RUN make deps && make build && make clean
  25 |
  26 |     # Set environment variable for the host
--------------------
ERROR: failed to solve: process "/bin/sh -c make deps && make build && make clean" did not complete successfully: exit code: 2
make: *** [docker] Error 1
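The ninja error above ('/.git/modules/.../index' missing, needed by build-info.h) suggests the vendored llama.cpp submodule was never initialized inside the image. A sketch of the usual remedy when installing from a git checkout (IgorBeHolder's Dockerfile further down does essentially this):

    # Clone with submodules so vendor/llama.cpp is populated:
    git clone --recurse-submodules https://github.com/abetlen/llama-cpp-python.git
    cd llama-cpp-python
    # or, in an existing checkout:
    git submodule update --init --recursive
    pip install -e .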
himanshus110 commented 1 year ago

I'm using a conda environment. I've set the CMAKE_C_COMPILER and CMAKE_CXX_COMPILER variables, but the error still says they aren't set.
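Worth noting: CMake does not read CMAKE_C_COMPILER/CMAKE_CXX_COMPILER from the environment; those are cache variables that have to be passed as -D flags, which llama-cpp-python picks up via the CMAKE_ARGS environment variable. A sketch assuming the compilers come from conda-forge (package names are illustrative; the conda-forge compiler packages export CC/CXX on activation):

    # e.g. conda install -c conda-forge cxx-compiler cmake ninja
    export CMAKE_ARGS="-DCMAKE_C_COMPILER=$CC -DCMAKE_CXX_COMPILER=$CXX"
    pip install llama-cpp-python --no-cache-dir --verbose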

brownsnow commented 1 year ago

Arch Linux here, and I get this problem as well.

IgorBeHolder commented 1 year ago

I clone the repository and initialize the llama.cpp submodule directly in the Dockerfile:

FROM python:3.11-slim-bullseye

RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends \
    python3 \
    python3-pip \
    ninja-build \
    libopenblas-dev \
    build-essential
RUN apt-get update && apt-get install -y git

RUN mkdir /app
RUN git clone https://github.com/abetlen/llama-cpp-python.git /app
WORKDIR /app
RUN git submodule update --init --recursive

RUN python3 -m pip install --upgrade pip
ENV CMAKE_ARGS=${CMAKE_ARGS}
RUN --mount=type=cache,target=/root/.cache/pip python3 -m pip install -e ".[all]"

COPY --chmod=755 run.sh /run.sh
CMD ["/bin/sh", "/run.sh"]
and a docker-compose.yml, modeled on the one next to the Dockerfile in docker/simple:
version: '3.9'
name: llama

networks:
  anything-llm:
    driver: bridge

services:
  llama-llm:
    container_name: llama-cont
    image: llama:latest
    platform: linux/amd64
    environment:
      - HOST=0.0.0.0
      - PORT=3003
      - CMAKE_ARGS="-DLLAMA_OPENBLAS=on -DLLAMA_OPENBLAS_VENDOR=blis"
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        DOCKER_BUILDKIT: 1  # enable buildkit
    volumes:
      - "../../models:/app/models"  # mount the bentoml folder
    ports:
      - "3003:3003"
    networks:
      - anything-llm
    ulimits:
      memlock:
        soft: -1
        hard: -1
and run.sh:
python3 -m llama_cpp.server \
    --model models/llama-2-7b-chat.Q4_K_M.gguf \
    --host $HOSTNAME \
    --port $PORT \
    --n_ctx 2048
Hope it will help somebody
sujeendran commented 1 year ago

@himanshus110 - I managed to install it on Windows with a few changes. Could you try pulling from this fork and following the extra steps mentioned here?

I have created PR #848 to include it in the main repo. Hopefully it works for you too.

NOTE: OpenBLAS can still be tricky. You can also try without the -DLLAMA_OPENBLAS=on argument if it complains about OpenBLAS.
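For reference, a sketch of both variants, using the same flag that appears elsewhere in this thread (newer llama.cpp releases have since renamed these options):

    # Plain CPU build, no BLAS:
    CMAKE_ARGS="" pip install llama-cpp-python --no-cache-dir --force-reinstall
    # With OpenBLAS, if the library and headers are installed:
    CMAKE_ARGS="-DLLAMA_OPENBLAS=on" pip install llama-cpp-python --no-cache-dir --force-reinstall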

lastYoueven commented 4 months ago

> pip install llama-cpp-python
Collecting llama-cpp-python
  Using cached llama_cpp_python-0.2.79.tar.gz (50.3 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in e:\stable-diffusion-webui\extensions\comfyui_windows_portable\comfyui\venv\lib\site-packages (from llama-cpp-python) (4.12.2)
Requirement already satisfied: numpy>=1.20.0 in e:\stable-diffusion-webui\extensions\comfyui_windows_portable\comfyui\venv\lib\site-packages (from llama-cpp-python) (1.26.4)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: jinja2>=2.11.3 in e:\stable-diffusion-webui\extensions\comfyui_windows_portable\comfyui\venv\lib\site-packages (from llama-cpp-python) (3.1.4)
Requirement already satisfied: MarkupSafe>=2.0 in e:\stable-diffusion-webui\extensions\comfyui_windows_portable\comfyui\venv\lib\site-packages (from jinja2>=2.11.3->llama-cpp-python) (2.1.5)
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
  scikit-build-core 0.9.6 using CMake 3.28.1 (wheel)
  Configuring CMake...
  2024-06-20 11:30:31,404 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
  loading initial cache file C:\Users\mfker\AppData\Local\Temp\tmphjhyhtin\build\CMakeInit.txt
  -- Building for: NMake Makefiles
  CMake Error at CMakeLists.txt:3 (project):
    Running

     'nmake' '-?'

    failed with:

     no such file or directory

  CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
  CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
  -- Configuring incomplete, errors occurred!

  *** CMake configuration failed
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

Some error with the pip install. Thanks for your help!
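If the source build keeps failing on Windows, one workaround is the project's prebuilt CPU wheels, which skip the local CMake build entirely; the index URL below is the one documented in the llama-cpp-python README (check that a wheel exists for your Python version):

    # Install from the prebuilt CPU wheel index instead of building locally:
    pip install llama-cpp-python \
      --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu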

flynet commented 1 month ago

Have the same issue, one year already. Has anyone figured it out?

So I solved it by installing an earlier version.

jtoump commented 2 weeks ago

> Have the same issue, one year already. Has anyone figured it out?
>
> So I solved it by installing an earlier version.

I never thought of trying an older version; it worked for me too. I installed v0.2.70. The changes should be investigated, maybe a bug in the requirements?

sebastianczech commented 1 week ago

> Have the same issue, one year already. Has anyone figured it out? So I solved it by installing an earlier version.
>
> I never thought of trying an older version; it worked for me too. I installed v0.2.70. The changes should be investigated, maybe a bug in the requirements?

I tried older versions but without progress:

> pip install llama-cpp-python==0.2.70 --no-cache-dir
Collecting llama-cpp-python==0.2.70
  Downloading llama_cpp_python-0.2.70.tar.gz (46.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.4/46.4 MB 11.6 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in ./venv/lib/python3.12/site-packages (from llama-cpp-python==0.2.70) (4.12.2)
Requirement already satisfied: numpy>=1.20.0 in ./venv/lib/python3.12/site-packages (from llama-cpp-python==0.2.70) (2.1.2)
Collecting diskcache>=5.6.1 (from llama-cpp-python==0.2.70)
  Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: jinja2>=2.11.3 in ./venv/lib/python3.12/site-packages (from llama-cpp-python==0.2.70) (3.1.4)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.12/site-packages (from jinja2>=2.11.3->llama-cpp-python==0.2.70) (3.0.2)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [78 lines of output]
      *** scikit-build-core 0.10.7 using CMake 3.30.5 (wheel)
      *** Configuring CMake...
      loading initial cache file /var/folders/6q/dn566lyd20z9x1phnzlwgmlc0000gn/T/tmpnay77ovj/build/CMakeInit.txt
      -- The C compiler identification is AppleClang 16.0.0.16000026
      -- The CXX compiler identification is AppleClang 16.0.0.16000026
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/gcc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/g++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.39.5 (Apple Git-154)")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
      -- Found Threads: TRUE
      -- Accelerate framework found
      -- Metal framework found
      -- The ASM compiler identification is AppleClang
      -- Found assembler: /Library/Developer/CommandLineTools/usr/bin/gcc
      -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
      -- CMAKE_SYSTEM_PROCESSOR: arm64
      -- ARM detected
      -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
      -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
      CMake Warning (dev) at vendor/llama.cpp/CMakeLists.txt:1270 (install):
        Target llama has RESOURCE files but no RESOURCE DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      CMake Warning (dev) at CMakeLists.txt:26 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      CMake Warning (dev) at CMakeLists.txt:35 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      -- Configuring done (0.6s)
      -- Generating done (0.0s)
      -- Build files have been written to: /var/folders/6q/dn566lyd20z9x1phnzlwgmlc0000gn/T/tmpnay77ovj/build
      *** Building project with Ninja...
      Change Dir: '/var/folders/6q/dn566lyd20z9x1phnzlwgmlc0000gn/T/tmpnay77ovj/build'

      Run Build Command(s): /private/var/folders/6q/dn566lyd20z9x1phnzlwgmlc0000gn/T/pip-build-env-1kbdcd09/normal/lib/python3.12/site-packages/ninja/data/bin/ninja -v
......
......
......
      [11/30] /Library/Developer/CommandLineTools/usr/bin/gcc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_METAL_EMBED_LIBRARY -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_LLAMAFILE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/6q/dn566lyd20z9x1phnzlwgmlc0000gn/T/pip-install-9dh6g3in/llama-cpp-python_b06b60e65d3c4c63bcc9b3acbf8e5abc/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/6q/dn566lyd20z9x1phnzlwgmlc0000gn/T/pip-install-9dh6g3in/llama-cpp-python_b06b60e65d3c4c63bcc9b3acbf8e5abc/vendor/llama.cpp/ggml.c
      ninja: build stopped: subcommand failed.

      *** CMake build failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: ERROR: Failed to build in

I'm doing this on a Mac M1 with Python 3.12.7.
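The actual compiler error on this machine is inside the elided '......' part of the log, so rerunning with verbose output is the first step; note also that an old pinned release vendors an old llama.cpp, which may not compile against a newer macOS SDK, so trying the latest release is a reasonable counterpart to downgrading. A sketch (nothing here is specific to this repo):

    # Make sure the Xcode Command Line Tools are installed and current:
    xcode-select --install
    # Rebuild with full output so the first real compiler error is visible:
    pip install llama-cpp-python --no-cache-dir --verbose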