abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

Cannot install llama-cpp-python #1621

Open XingchenMengxiang opened 1 month ago

XingchenMengxiang commented 1 month ago

Prerequisites

Installation command:

pip install llama-cpp-python --verbose

Environment and Context

$ python3 --version
Python 3.12.3
$ make --version
GNU Make 3.82
$ g++ --version
gcc (GCC) 11.2.0

Failure Logs

[11/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/g++ -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/common/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/common/train.cpp
[12/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/g++ -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/common/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/common/ngram-cache.cpp
[13/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/g++ -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/common/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/examples/llava/../../common -O3 -DNDEBUG -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/examples/llava/llava-cli.cpp
[14/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/gcc -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/ggml-quants.c
FAILED: vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o
/mnt/petrelfs/share/gcc/gcc-11.2.0/bin/gcc -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/ggml-quants.c
/tmp/ccHC5BsH.s: Assembler messages:
/tmp/ccHC5BsH.s:32105: Error: no such instruction: `vpdpbusd %ymm0,%ymm4,%ymm2'
/tmp/ccHC5BsH.s:32161: Error: no such instruction: `vpdpbusd 4(%r9),%ymm0,%ymm2'
/tmp/ccHC5BsH.s:32227: Error: no such instruction: `vpdpbusd %ymm1,%ymm4,%ymm0'
/tmp/ccHC5BsH.s:32295: Error: no such instruction: `vpdpbusd 4(%r9),%ymm0,%ymm1'
/tmp/ccHC5BsH.s:32346: Error: no such instruction: `vpdpbusd %ymm2,%ymm4,%ymm1'
[15/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/gcc -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/ggml.c
[16/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/g++ -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/unicode.cpp
[17/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/g++ -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/../include -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/examples/llava/clip.cpp
[18/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/g++ -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/common/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/common/json-schema-to-grammar.cpp
[19/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/g++ -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/common/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/common/common.cpp
[20/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/g++ -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/unicode-data.cpp
[21/27] /mnt/petrelfs/share/gcc/gcc-11.2.0/bin/g++ -pthread -B /mnt/petrelfs/yaoshuo/miniconda3/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/. -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/../include -I/tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c/vendor/llama.cpp/src/llama.cpp
ninja: build stopped: subcommand failed.

*** CMake build failed
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /mnt/petrelfs/yaoshuo/miniconda3/bin/python /mnt/petrelfs/yaoshuo/miniconda3/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmpv0ch2zhp
cwd: /tmp/pip-install-4ml_mygh/llama-cpp-python_051c2755478e4ca59d901bbc1e18867c
Building wheel for llama-cpp-python (pyproject.toml) ... error
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
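
The failing step above is ggml-quants.c: with -march=native, gcc 11.2 emits AVX-VNNI instructions (vpdpbusd) for this CPU, but the assembler that gcc invokes does not recognize them, so the object file is never produced and ninja stops. Two workarounds worth trying, as sketches rather than confirmed fixes (the CMake option is spelled GGML_NATIVE in recent vendored llama.cpp trees and LLAMA_NATIVE in older ones, and the binutils path below is only a placeholder):

# 1) Build without -march=native so no AVX-VNNI code is generated
CMAKE_ARGS="-DGGML_NATIVE=OFF" pip install llama-cpp-python --verbose --no-cache-dir

# 2) Or put a newer binutils first on PATH so gcc finds an assembler that knows vpdpbusd
#    (only helps if this gcc resolves `as` through PATH rather than a bundled binutils)
export PATH=/path/to/newer/binutils/bin:$PATH
pip install llama-cpp-python --verbose --no-cache-dir

If building from source is not a hard requirement, the project README also documents prebuilt wheel indexes (e.g. --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu) that avoid compiling locally at all.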

littlebai3618 commented 1 month ago

I encountered exactly the same issue; my installation command is as follows:

CMAKE_ARGS="-DLLAMA_CUDA=on -DCMAKE_BUILD_TYPE=Debug -DLLAMA_CUDA_DMMV_X=256 -DLLAMA_CUDA_MMV_Y=32" FORCE_CMAKE=1 pip install llama-cpp-python==0.2.88 --no-cache-dir --force-reinstall --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple

Environment:

(server) root@aistudio-75251-prod-0:~# python3 --version
Python 3.11.5
(server) root@aistudio-75251-prod-0:~# make --version
GNU Make 4.2.1
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
(server) root@aistudio-75251-prod-0:~# g++ --version
g++ (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.