OpenCSGs / llm-inference

llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deployment, such as a UI, RESTful API, auto-scaling, computing resource management, monitoring, and more.
Apache License 2.0

Install dependency llama-cpp-python failed #59

Open · SeanHH86 opened this issue 5 months ago

SeanHH86 commented 5 months ago
Using cached exceptiongroup-1.2.0-py3-none-any.whl (16 kB)
Building wheels for collected packages: deepspeed, llama-cpp-python, llm-serve, ffmpy
  Building wheel for deepspeed (setup.py) ... done
  Created wheel for deepspeed: filename=deepspeed-0.14.0-py3-none-any.whl size=1400347 sha256=db3cabb92e930a4d76b2adf48e2bae802dc28c333d54d790ab2b4256efe03fe0
  Stored in directory: /Users/hhwang/Library/Caches/pip/wheels/23/96/24/bab20c3b4e2af15e195b339afaec373eca7072cf90620432e5
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [66 lines of output]
      *** scikit-build-core 0.8.2 using CMake 3.29.0 (wheel)
      *** Configuring CMake...
      2024-03-31 14:09:18,364 - scikit_build_core - WARNING - libdir/ldlibrary: /Users/hhwang/anaconda3/envs/abc/lib/libpython3.10.a is not a real file!
      2024-03-31 14:09:18,364 - scikit_build_core - WARNING - Can't find a Python library, got libdir=/Users/hhwang/anaconda3/envs/abc/lib, ldlibrary=libpython3.10.a, multiarch=darwin, masd=None
      loading initial cache file /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/CMakeInit.txt
      -- The C compiler identification is AppleClang 15.0.0.15000309
      -- The CXX compiler identification is AppleClang 15.0.0.15000309
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.39.3 (Apple Git-146)")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
      -- Found Threads: TRUE
      -- Accelerate framework found
      -- Metal framework found
      -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
      -- CMAKE_SYSTEM_PROCESSOR: arm64
      -- ARM detected
      -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
      -- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
      CMake Warning (dev) at vendor/llama.cpp/CMakeLists.txt:1218 (install):
        Target llama has RESOURCE files but no RESOURCE DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      CMake Warning (dev) at CMakeLists.txt:21 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      CMake Warning (dev) at CMakeLists.txt:30 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.

      -- Configuring done (0.5s)
      -- Generating done (0.0s)
      -- Build files have been written to: /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build
      *** Building project with Ninja...
      Change Dir: '/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build'

      Run Build Command(s): /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-build-env-h3q63wii/normal/lib/python3.10/site-packages/ninja/data/bin/ninja -v
      [1/25] cd /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/vendor/llama.cpp && xcrun -sdk macosx metal -O3 -c /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.metal -o /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.air && xcrun -sdk macosx metallib /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.air -o /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/default.metallib && rm -f /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.air && rm -f /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-common.h && rm -f /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.metal
      FAILED: bin/default.metallib /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/default.metallib
      cd /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/vendor/llama.cpp && xcrun -sdk macosx metal -O3 -c /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.metal -o /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.air && xcrun -sdk macosx metallib /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.air -o /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/default.metallib && rm -f /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.air && rm -f /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-common.h && rm -f /var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/tmpujapt1jr/build/bin/ggml-metal.metal
      xcrun: error: unable to find utility "metal", not a developer tool or in PATH
      [2/25] cd /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp && /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-build-env-h3q63wii/normal/lib/python3.10/site-packages/cmake/data/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=15.0.0.15000309 -DCMAKE_C_COMPILER_ID=AppleClang -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/Library/Developer/CommandLineTools/usr/bin/cc -P /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/common/../scripts/gen-build-info-cpp.cmake
      -- Found Git: /usr/bin/git (found version "2.39.3 (Apple Git-146)")
      [3/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -mmacosx-version-min=14.3 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/ggml-alloc.c
      [4/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -mmacosx-version-min=14.3 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-backend.c.o -c /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/ggml-backend.c
      [5/25] /Library/Developer/CommandLineTools/usr/bin/c++ -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/examples/llava/. -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -mmacosx-version-min=14.3 -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/examples/llava/llava.cpp
      [6/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -mmacosx-version-min=14.3 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/ggml-metal.m
      [7/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -mmacosx-version-min=14.3 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-quants.c.o -c /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/ggml-quants.c
      [8/25] /Library/Developer/CommandLineTools/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -mmacosx-version-min=14.3 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/unicode.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/unicode.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/unicode.cpp.o -c /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/unicode.cpp
      [9/25] /Library/Developer/CommandLineTools/usr/bin/c++ -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/examples/llava/. -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/examples/llava/../.. -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/examples/llava/../../common -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/. -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -mmacosx-version-min=14.3 -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/examples/llava/clip.cpp
      [10/25] /Library/Developer/CommandLineTools/usr/bin/cc -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -mmacosx-version-min=14.3 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/ggml.c
      [11/25] /Library/Developer/CommandLineTools/usr/bin/c++ -DACCELERATE_LAPACK_ILP64 -DACCELERATE_NEW_LAPACK -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_ACCELERATE -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk/System/Library/Frameworks -O3 -DNDEBUG -std=gnu++11 -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.4.sdk -mmacosx-version-min=14.3 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /private/var/folders/wm/9mckczj143jdzdzr037_g8lm0000gn/T/pip-install-f3axqn06/llama-cpp-python_20b9c0a542fd49ac96cea9fb409a8d94/vendor/llama.cpp/llama.cpp
      ninja: build stopped: subcommand failed.

      *** CMake build failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
  Building wheel for llm-serve (pyproject.toml) ... done
  Created wheel for llm-serve: filename=llm_serve-0.0.1-py3-none-any.whl size=100808 sha256=5896e4e7b35cf15f8977a5847a9ff40f78ed2ae42e95adc28def70cefc2b426c
  Stored in directory: /Users/hhwang/Library/Caches/pip/wheels/cb/6e/71/619b3e1f616ba182cb9bfc8e0e239a9e8402f4305bc75d27d7
  Building wheel for ffmpy (setup.py) ... done
  Created wheel for ffmpy: filename=ffmpy-0.3.2-py3-none-any.whl size=5582 sha256=f2f3304e01d27a1e9f63c8c504d5d56cf0a5c40ec98c2e805c1a5d8c41ea17be
  Stored in directory: /Users/hhwang/Library/Caches/pip/wheels/bd/65/9a/671fc6dcde07d4418df0c592f8df512b26d7a0029c2a23dd81
Successfully built deepspeed llm-serve ffmpy
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
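The failing step in the log above is the Metal shader compilation: xcrun cannot find the "metal" utility, which ships with full Xcode rather than with the standalone Command Line Tools. A diagnostic/workaround sketch (the LLAMA_METAL CMake option name is assumed from the vendored llama.cpp of this period, not confirmed by the log):

# Check which developer directory xcrun uses and whether it can locate
# the Metal compiler (full Xcode provides it; Command Line Tools alone
# usually do not).
$ xcode-select -p
$ xcrun -sdk macosx -f metal

# If full Xcode is installed but not selected, switch to it:
$ sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer

# Or skip the Metal backend entirely when building the wheel
# (option name assumed, see above):
$ CMAKE_ARGS="-DLLAMA_METAL=OFF" pip install --no-cache-dir llama-cpp-python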
SeanHH86 commented 5 months ago

Installing llama_cpp_python==0.2.57 fails on macOS, but llama_cpp_python==0.2.56 installs successfully.
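If staying on the last known-good release is acceptable, pre-installing or pinning it keeps the rest of the install from rebuilding 0.2.57 from source. A workaround sketch; the final pip install command is a placeholder for however the llm-inference dependencies are actually installed:

# Install the version that still builds on this machine first...
$ pip install "llama_cpp_python==0.2.56"

# ...or pin it with a constraints file so the main install cannot pull in 0.2.57.
$ echo "llama_cpp_python==0.2.56" > constraints.txt
$ pip install -c constraints.txt .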

depenglee1707 commented 5 months ago

@SeanHH86, that's weird, I can install it successfully.
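For comparing the two environments, a quick check of which wheel is actually installed and whether it imports cleanly (assuming the package exposes __version__, as recent releases do):

$ pip show llama_cpp_python | head -n 2
$ python -c "import llama_cpp; print(llama_cpp.__version__)"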

SeanHH86 commented 5 months ago

$ python -V
Python 3.10.14

$ gcc -v
Apple clang version 15.0.0 (clang-1500.3.9.4)
Target: arm64-apple-darwin23.3.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin

SeanHH86 commented 4 months ago

Installation also fails on Ubuntu 20.04 + Python 3.10.14 with the same issue.
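Since the Metal step from the macOS log does not apply on Linux, the Ubuntu failure is likely a different compiler error that is not shown in this thread. A generic diagnostic sketch for capturing it:

# Make sure a full C/C++ toolchain is available, then rebuild the wheel
# verbosely and keep the log for inspection.
$ sudo apt-get install -y build-essential cmake ninja-build
$ pip install --verbose --no-cache-dir "llama_cpp_python==0.2.57" 2>&1 | tee llama-build.log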