abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

The same as #714. Can't install llama-cpp-python - libpython3.11.a file not found during wheel build #791

Open terfani opened 1 year ago

terfani commented 1 year ago

I went through everything suggested in the other threads but still get the same error when installing both llama.cpp and llama-cpp-python. I also tried the development installation as suggested, and tried removing the Apple-specific parts since I'm on macOS (an Intel machine, not M1), but I still get the same error.

I also tried building llama.cpp as suggested in #714, and I get "use of undeclared identifier" errors.
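For the `libpython3.11.a` error in the title, it can help to first check whether the static Python library even exists where the linker would look for it. A minimal stdlib-only sketch (the interpretation of a missing `.a` file is an assumption based on how typical CPython distributions are built):

```python
import sysconfig
from pathlib import Path

# Where this interpreter keeps its library files
libdir = Path(sysconfig.get_config_var("LIBDIR") or "")
version = sysconfig.get_config_var("VERSION")  # e.g. "3.11"
static_lib = libdir / f"libpython{version}.a"

print(f"LIBDIR:     {libdir}")
print(f"static lib: {static_lib} (exists: {static_lib.exists()})")
# Many Python builds (e.g. the official macOS installer, which uses a
# framework layout) ship no static .a library at all; a missing file here
# means the build must link the shared library instead, or Python must be
# rebuilt with static support (e.g. via pyenv).
```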

IgorBeHolder commented 1 year ago

Same problem on macOS (Intel, 2019). I tried the 'simple' install routine.

strelkon commented 1 year ago

Somehow this helped me - I have an Intel-based MacBook from 2018. At least it installed correctly:

CMAKE_ARGS="-DLLAMA_METAL=off -DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install -U git+https://github.com/abetlen/llama-cpp-python.git --no-cache-dir
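If you'd rather drive that one-liner from a script, the same flags can be assembled in Python. A hedged sketch that only builds the environment and command without actually invoking pip:

```python
import os

# Same flags as the shell one-liner above: disable Metal, enable CLBlast,
# and force a CMake-based build.
env = dict(os.environ)
env["CMAKE_ARGS"] = "-DLLAMA_METAL=off -DLLAMA_CLBLAST=on"
env["FORCE_CMAKE"] = "1"

cmd = [
    "pip", "install", "-U",
    "git+https://github.com/abetlen/llama-cpp-python.git",
    "--no-cache-dir",
]
# To actually run it: subprocess.run(cmd, env=env, check=True)
print(" ".join(cmd))
```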

IgorBeHolder commented 1 year ago

@strelkon Thanks for the suggestion! Your snippet completed successfully, but make docker ends with an error:

95.38 Building wheels for collected packages: llama_cpp_python, paginate
95.38 Building editable for llama_cpp_python (pyproject.toml): started
96.21 Building editable for llama_cpp_python (pyproject.toml): finished with status 'error'
96.22 error: subprocess-exited-with-error
96.22
96.22 × Building editable for llama_cpp_python (pyproject.toml) did not run successfully.
96.22 │ exit code: 1
96.22 ╰─> [48 lines of output]
96.22     *** scikit-build-core 0.5.1 using CMake 3.27.6 (editable)
96.22     *** Configuring CMake...
96.22     loading initial cache file /tmp/tmp84zfr8qa/build/CMakeInit.txt
96.22     -- The C compiler identification is GNU 10.2.1
96.22     -- The CXX compiler identification is GNU 10.2.1
96.22     -- Detecting C compiler ABI info
96.22     -- Detecting C compiler ABI info - done
96.22     -- Check for working C compiler: /usr/bin/cc - skipped
96.22     -- Detecting C compile features
96.22     -- Detecting C compile features - done
96.22     -- Detecting CXX compiler ABI info
96.22     -- Detecting CXX compiler ABI info - done
96.22     -- Check for working CXX compiler: /usr/bin/c++ - skipped
96.22     -- Detecting CXX compile features
96.22     -- Detecting CXX compile features - done
96.22     -- Could NOT find Git (missing: GIT_EXECUTABLE)
96.22     CMake Warning at vendor/llama.cpp/scripts/build-info.cmake:16 (message):
96.22       Git not found. Build info will not be accurate.
96.22     Call Stack (most recent call first):
96.22       vendor/llama.cpp/CMakeLists.txt:108 (include)
96.22
96.22     -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
96.22     -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
96.22     -- Check if compiler accepts -pthread
96.22     -- Check if compiler accepts -pthread - yes
96.22     -- Found Threads: TRUE
96.22     -- CMAKE_SYSTEM_PROCESSOR: x86_64
96.22     -- x86 detected
96.22     CMake Warning (dev) at CMakeLists.txt:18 (install):
96.22       Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
96.22     This warning is for project developers. Use -Wno-dev to suppress it.
96.22
96.22     CMake Warning (dev) at CMakeLists.txt:27 (install):
96.22       Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
96.22     This warning is for project developers. Use -Wno-dev to suppress it.
96.22
96.22     -- Configuring done (0.6s)
96.22     -- Generating done (0.0s)
96.22     -- Build files have been written to: /tmp/tmp84zfr8qa/build
96.22     *** Building project with Ninja...
96.22     Change Dir: '/tmp/tmp84zfr8qa/build'
96.22     Run Build Command(s): /usr/bin/ninja -v
96.22     ninja: error: '/.git/modules/llama-cpp-python/modules/vendor/llama.cpp/index', needed by '/app/vendor/llama.cpp/build-info.h', missing and no known rule to make it
96.22
96.22     *** CMake build failed
96.22     [end of output]
96.22
96.22 note: This error originates from a subprocess, and is likely not a problem with pip.
96.22 ERROR: Failed building editable for llama_cpp_python
96.22 Building wheel for paginate (setup.py): started
96.49 Building wheel for paginate (setup.py): finished with status 'done'
96.49 Created wheel for paginate: filename=paginate-0.5.6-py3-none-any.whl size=12666 sha256=c52f2fd3211c48e5935109f78a3ccdf1eeba23e5714f4192e822c67e81f6aca3
96.49 Stored in directory: /root/.cache/pip/wheels/03/20/4e/4925d1027f4b377bef23999a1a5eaa438339b741a6a2f3ad39
96.49 Successfully built paginate
96.49 Failed to build llama_cpp_python
96.49 ERROR: Could not build wheels for llama_cpp_python, which is required to install pyproject.toml-based projects
97.02 make: *** [Makefile:10: deps] Error 1
------
 > Dockerfile:24
--------------------
  22 |     RUN python3 -m pip install --upgrade pip
  23 |
  24 | >>> RUN make deps && make build && make clean
  25 |
  26 |     # Set environment variable for the host
--------------------
ERROR: failed to solve: process "/bin/sh -c make deps && make build && make clean" did not complete successfully: exit code: 2
make: *** [docker] Error 1
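The ninja error about a missing `.git/modules/.../index` file suggests the Docker build context contains the checkout without its Git metadata (or with the vendored llama.cpp submodule uninitialised), which llama.cpp's build-info step depends on. A small stdlib-only sketch of that check (the paths mirror a typical llama-cpp-python clone and are assumptions, not part of any official tooling):

```python
from pathlib import Path

def git_metadata_status(repo_root: str) -> str:
    """Report whether a checkout still has the Git metadata that
    llama.cpp's build-info.cmake step expects to find."""
    root = Path(repo_root)
    submodule = root / "vendor" / "llama.cpp"
    if not (root / ".git").exists():
        return "no .git: metadata stripped (e.g. not copied into the Docker context)"
    if not (submodule / ".git").exists():
        return "submodule not initialised: run 'git submodule update --init --recursive'"
    return "ok"

print(git_metadata_status("."))
```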
terfani commented 1 year ago

CMAKE_ARGS="-DLLAMA_METAL=off -DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install -U git+https://github.com/abetlen/llama-cpp-python.git --no-cache-dir

This worked for me also.

VArdulov commented 11 months ago

I'm not 100% sure what caused the issue, but I was running into problems on my 2018 MacBook Pro (the error was not reproducible on my Ubuntu laptop, which installed llama-cpp-python without issue). Updating to the latest Xcode (v15.0.1), which also required me to update my macOS to 10.13, solved the issue for me: I was then able both to cmake the original llama.cpp package and to simply run pip install llama-cpp-python.

My recommendation for debugging this issue is to try running the cmake commands for llama.cpp directly:

git clone git@github.com:ggerganov/llama.cpp.git
cd llama.cpp
mkdir build
cd build
cmake ..
# if previous step does not produce errors
cmake --build . --config Release

This will tell you whether the problem is with building llama.cpp in general or with the Python package specifically, and will help us debug next steps.
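If you want to script those steps, a small helper that stops at the first failing stage makes it obvious which side is broken. A hedged stdlib-only sketch (the command list is an assumption mirroring the manual steps above; it is not run here in case cmake is absent):

```python
import subprocess
import sys

def run_step(name: str, cmd: list[str]) -> bool:
    """Run one build step and report whether it succeeded."""
    print(f"==> {name}: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"step '{name}' failed with exit code {result.returncode}")
        return False
    return True

# Mirrors the manual steps above; stop at the first failure so the error
# clearly points at either the C++ configure or the C++ build stage.
steps = [
    ("configure", ["cmake", "-B", "build"]),
    ("build", ["cmake", "--build", "build", "--config", "Release"]),
]
# for name, cmd in steps:
#     if not run_step(name, cmd):
#         break
```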

Last quick note: when I updated macOS and subsequently Xcode, I initially tried just re-running the pip install command, which still failed. Running the aforementioned cmake commands required me to explicitly accept the Xcode license as admin, and after taking all of those steps the pip install command ran smoothly for me.

Hope this helps