TheBlewish / Automated-AI-Web-Researcher-Ollama

A Python program that turns an LLM running on Ollama into an automated researcher: given a single query, it determines focus areas to investigate, performs web searches, scrapes content from relevant websites, carries out the research entirely on its own, and saves the findings for you.
MIT License
1.04k stars · 102 forks

pip install fails on building llama-cpp-python wheel #16

Open · Karthik-Dulam opened this issue 1 day ago

Karthik-Dulam commented 1 day ago

OS: Ubuntu 22.04.1
Python: 3.12.2
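For reference, the relevant toolchain versions can be gathered with a short script like the following (a sketch; which tools are present varies by host):

```shell
# Print the versions of the tools involved in the llama-cpp-python
# wheel build; each tool is checked first since availability varies.
for tool in python3 gcc g++ cmake ninja pip; do
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" --version | head -n 1
  fi
done
```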

The wheel build for llama-cpp-python fails:

$ pip install -r requirements.txt 
...
Building wheels for collected packages: llama-cpp-python 
  Building wheel for llama-cpp-python (pyproject.toml) ... error 
  error: subprocess-exited-with-error 
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully. 
  │ exit code: 1 
  ╰─> [128 lines of output] 
      *** scikit-build-core 0.10.7 using CMake 3.31.0 (wheel) 
      *** Configuring CMake... 
      loading initial cache file /tmp/tmpns9ajbe0/build/CMakeInit.txt 
      -- The C compiler identification is GNU 11.4.0 
      -- The CXX compiler identification is GNU 11.4.0 
      -- Detecting C compiler ABI info 
      -- Detecting C compiler ABI info - done 
      -- Check for working C compiler: /usr/bin/gcc - skipped 
      -- Detecting C compile features 
      -- Detecting C compile features - done 
      -- Detecting CXX compiler ABI info 
      -- Detecting CXX compiler ABI info - done 
      -- Check for working CXX compiler: /usr/bin/g++ - skipped 
      -- Detecting CXX compile features 
      -- Detecting CXX compile features - done 
      -- Found Git: /usr/bin/git (found version "2.34.1") 
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD 
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success 
      -- Found Threads: TRUE 
      -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF 
      -- CMAKE_SYSTEM_PROCESSOR: x86_64 
      -- Found OpenMP_C: -fopenmp (found version "4.5") 
      -- Found OpenMP_CXX: -fopenmp (found version "4.5") 
      -- Found OpenMP: TRUE (found version "4.5") 
      -- OpenMP found 
      -- Using llamafile 
      -- x86 detected 
      -- Using runtime weight conversion of Q4_0 to Q4_0_x_x to enable optimized GEMM/GEMV kernels 
      -- Including CPU backend 
      -- Using AMX 
      -- Including AMX backend 
      CMake Warning (dev) at CMakeLists.txt:13 (install): 
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 
      Call Stack (most recent call first): 
        CMakeLists.txt:80 (llama_cpp_python_install_target) 
      This warning is for project developers.  Use -Wno-dev to suppress it. 
      CMake Warning (dev) at CMakeLists.txt:21 (install): 
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 
      Call Stack (most recent call first): 
        CMakeLists.txt:80 (llama_cpp_python_install_target) 
      This warning is for project developers.  Use -Wno-dev to suppress it. 
      CMake Warning (dev) at CMakeLists.txt:13 (install): 
        Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 
      Call Stack (most recent call first): 
        CMakeLists.txt:81 (llama_cpp_python_install_target) 
      This warning is for project developers.  Use -Wno-dev to suppress it. 
      CMake Warning (dev) at CMakeLists.txt:21 (install): 
        Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION. 
      Call Stack (most recent call first): 
        CMakeLists.txt:81 (llama_cpp_python_install_target) 
      This warning is for project developers.  Use -Wno-dev to suppress it. 
      -- Configuring done (1.3s) 
      -- Generating done (0.0s) 
      -- Build files have been written to: /tmp/tmpns9ajbe0/build 
      *** Building project with Ninja... 
      Change Dir: '/tmp/tmpns9ajbe0/build' 
      Run Build Command(s): /tmp/pip-build-env-73kgrlh6/normal/lib/python3.12/site-packages/ninja/data/bin/ninja -v 
      [1/43] cd /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp && /tmp/pip-build-env-73kgrlh6/normal/lib/python3.12/site-packages/cmake/data/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=11.4.0 -DCMAKE_C_COMPILER_ID=GNU -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/usr/bin/gcc -P /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/cmake/build-info-gen-cpp.cmake 
      -- Found Git: /usr/bin/git (found version "2.34.1") 
      [2/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat   -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/build-info.cpp 
      [3/43] /usr/bin/gcc  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-aarch64.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-aarch64.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-aarch64.c.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-aarch64.c 
      [4/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-threading.cpp 
      [5/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-backend-reg.cpp 
      [6/43] /usr/bin/gcc  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-alloc.c 
      [7/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/console.cpp 
      [8/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_amx_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-amx/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-amx/.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -MD -MT vendor/llama.cpp/ggml/src/ggml-amx/CMakeFiles/ggml-amx.dir/ggml-amx.cpp.o -MF vendor/llama.cpp/ggml/src/ggml-amx/CMakeFiles/ggml-amx.dir/ggml-amx.cpp.o.d -o vendor/llama.cpp/ggml/src/ggml-amx/CMakeFiles/ggml-amx.dir/ggml-amx.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-amx/ggml-amx.cpp 
      [9/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_amx_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-amx/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-amx/.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -MD -MT vendor/llama.cpp/ggml/src/ggml-amx/CMakeFiles/ggml-amx.dir/mmq.cpp.o -MF vendor/llama.cpp/ggml/src/ggml-amx/CMakeFiles/ggml-amx.dir/mmq.cpp.o.d -o vendor/llama.cpp/ggml/src/ggml-amx/CMakeFiles/ggml-amx.dir/mmq.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-amx/mmq.cpp 
      [10/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu.cpp.o -MF vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu.cpp.o.d -o vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp 
      [11/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/llava.cpp 
      [12/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/log.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/log.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/log.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/log.cpp 
      [13/43] /usr/bin/gcc  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-quants.c.o -MF vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-quants.c.o.d -o vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-quants.c.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-quants.c 
      [14/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-backend.cpp 
      [15/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/unicode-data.cpp 
      [16/43] /usr/bin/gcc  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-aarch64.c.o -MF vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-aarch64.c.o.d -o vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-aarch64.c.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.c 
      [17/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/include -O3 -DNDEBUG -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/llava-cli.cpp 
      [18/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/include -O3 -DNDEBUG -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/minicpmv-cli.cpp 
      [19/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/ngram-cache.cpp 
      [20/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/sampling.cpp 
      [21/43] /usr/bin/gcc  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml.c 
      [22/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/llama-grammar.cpp 
      [23/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/llama-sampling.cpp 
      [24/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/llama-vocab.cpp 
      [25/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/llamafile/sgemm.cpp.o -MF vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/llamafile/sgemm.cpp.o.d -o vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/llamafile/sgemm.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/llamafile/sgemm.cpp 
      [26/43] /usr/bin/gcc  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu.c.o -MF vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu.c.o.d -o vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu.c.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c 
      [27/43] /usr/bin/gcc  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/ggml-quants.c 
      [28/43] : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libggml-base.so -o vendor/llama.cpp/ggml/src/libggml-base.so vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-aarch64.c.o  -Wl,-rpath,"\$ORIGIN"  -lm && : 
      [29/43] : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libggml-amx.so -o vendor/llama.cpp/ggml/src/ggml-amx/libggml-amx.so vendor/llama.cpp/ggml/src/ggml-amx/CMakeFiles/ggml-amx.dir/ggml-amx.cpp.o vendor/llama.cpp/ggml/src/ggml-amx/CMakeFiles/ggml-amx.dir/mmq.cpp.o  -Wl,-rpath,"\$ORIGIN"  vendor/llama.cpp/ggml/src/libggml-base.so && : 
      [30/43] : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libggml-cpu.so -o vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu.c.o vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu.cpp.o vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-aarch64.c.o vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-quants.c.o vendor/llama.cpp/ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/llamafile/sgemm.cpp.o  -Wl,-rpath,"\$ORIGIN"  vendor/llama.cpp/ggml/src/libggml-base.so  /usr/lib/gcc/x86_64-linux-gnu/11/libgomp.so && : 
      [31/43] : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libggml.so -o vendor/llama.cpp/ggml/src/libggml.so vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o  -Wl,-rpath,"\$ORIGIN"  vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so  vendor/llama.cpp/ggml/src/ggml-amx/libggml-amx.so  vendor/llama.cpp/ggml/src/libggml-base.so && : 
      [32/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/common.cpp 
      [33/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/unicode.cpp 
      [34/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/json-schema-to-grammar.cpp 
      [35/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/examples/llava/clip.cpp 
      [36/43] : && /tmp/pip-build-env-73kgrlh6/normal/lib/python3.12/site-packages/cmake/data/bin/cmake -E rm -f vendor/llama.cpp/examples/llava/libllava_static.a && /usr/bin/ar qc vendor/llama.cpp/examples/llava/libllava_static.a  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o && /usr/bin/ranlib vendor/llama.cpp/examples/llava/libllava_static.a && : 
      [37/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_SHARED -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/arg.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/arg.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/arg.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/common/arg.cpp 
      [38/43] /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_AMX -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/. -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/../include -I/tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-oegh1a33/llama-cpp-python_050f75c93944400091e2f5925916ef80/vendor/llama.cpp/src/llama.cpp 
      [39/43] : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libllama.so -o vendor/llama.cpp/src/libllama.so vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o  -Wl,-rpath,"\$ORIGIN"  vendor/llama.cpp/ggml/src/libggml.so  vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so  vendor/llama.cpp/ggml/src/ggml-amx/libggml-amx.so  vendor/llama.cpp/ggml/src/libggml-base.so && : 
      [40/43] : && /tmp/pip-build-env-73kgrlh6/normal/lib/python3.12/site-packages/cmake/data/bin/cmake -E rm -f vendor/llama.cpp/common/libcommon.a && /usr/bin/ar qc vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/arg.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/log.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o && /usr/bin/ranlib vendor/llama.cpp/common/libcommon.a && : 
      [41/43] : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libllava.so -o vendor/llama.cpp/examples/llava/libllava.so vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o  -Wl,-rpath,"\$ORIGIN"  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so  vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so  vendor/llama.cpp/ggml/src/ggml-amx/libggml-amx.so  vendor/llama.cpp/ggml/src/libggml-base.so && : 
      [42/43] : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli  -Wl,-rpath,/tmp/tmpns9ajbe0/build/vendor/llama.cpp/src:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src/ggml-cpu:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src/ggml-amx:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so  vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so  vendor/llama.cpp/ggml/src/ggml-amx/libggml-amx.so  vendor/llama.cpp/ggml/src/libggml-base.so && : 
      FAILED: vendor/llama.cpp/examples/llava/llama-llava-cli 
      : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli  -Wl,-rpath,/tmp/tmpns9ajbe0/build/vendor/llama.cpp/src:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src/ggml-cpu:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src/ggml-amx:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so  vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so  vendor/llama.cpp/ggml/src/ggml-amx/libggml-amx.so  vendor/llama.cpp/ggml/src/libggml-base.so && : 
      /home/kartik/miniconda3/compiler_compat/ld: warning: libgomp.so.1, needed by vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so, not found (try using -rpath or -rpath-link) 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `GOMP_barrier@GOMP_1.0' 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `GOMP_parallel@GOMP_4.0' 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `omp_get_thread_num@OMP_1.0' 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `GOMP_single_start@GOMP_1.0' 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `omp_get_num_threads@OMP_1.0' 
      collect2: error: ld returned 1 exit status 
      [43/43] : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-minicpmv-cli  -Wl,-rpath,/tmp/tmpns9ajbe0/build/vendor/llama.cpp/src:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src/ggml-cpu:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src/ggml-amx:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so  vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so  vendor/llama.cpp/ggml/src/ggml-amx/libggml-amx.so  vendor/llama.cpp/ggml/src/libggml-base.so && : 
      FAILED: vendor/llama.cpp/examples/llava/llama-minicpmv-cli 
      : && /usr/bin/g++  -pthread -B /home/kartik/miniconda3/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-minicpmv-cli  -Wl,-rpath,/tmp/tmpns9ajbe0/build/vendor/llama.cpp/src:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src/ggml-cpu:/tmp/tmpns9ajbe0/build/vendor/llama.cpp/ggml/src/ggml-amx:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so  vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so  vendor/llama.cpp/ggml/src/ggml-amx/libggml-amx.so  vendor/llama.cpp/ggml/src/libggml-base.so && : 
      /home/kartik/miniconda3/compiler_compat/ld: warning: libgomp.so.1, needed by vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so, not found (try using -rpath or -rpath-link) 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `GOMP_barrier@GOMP_1.0' 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `GOMP_parallel@GOMP_4.0' 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `omp_get_thread_num@OMP_1.0' 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `GOMP_single_start@GOMP_1.0' 
      /home/kartik/miniconda3/compiler_compat/ld: vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so: undefined reference to `omp_get_num_threads@OMP_1.0' 
      collect2: error: ld returned 1 exit status 
      ninja: build stopped: subcommand failed. 
      *** CMake build failed 
      [end of output] 
  note: This error originates from a subprocess, and is likely not a problem with pip. 
  ERROR: Failed building wheel for llama-cpp-python 
Failed to build llama-cpp-python 
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python) 
TheBlewish commented 1 day ago

Here is some help brought to you by ChatGPT! The error you're encountering when building llama-cpp-python indicates the build is failing because the linker cannot find the OpenMP runtime (libgomp.so.1), most likely due to the build environment's configuration. Here's a step-by-step guide to resolve it:

  1. Ensure OpenMP Support is Installed

The error mentions missing libgomp.so.1, which is part of the GNU OpenMP implementation. Install it using the following command:

sudo apt update
sudo apt install libgomp1

  2. Check and Install Required Build Tools

Ensure you have all necessary build tools and libraries installed:

sudo apt install build-essential cmake ninja-build libomp-dev

  3. Verify GCC and G++ Versions

Ensure the GCC and G++ compilers are compatible. llama-cpp-python may require a version that supports the needed features (you have GCC 11.4.0, which should be fine). If needed, update GCC:

sudo apt install gcc g++

  4. Use a Virtual Environment

Using a virtual environment ensures a clean Python environment:

python3 -m venv llama-env
source llama-env/bin/activate

  5. Set the Correct Environment Variables

Some Python environments (e.g., Conda) override system paths, leading to errors. Set the environment variables to use system-installed compilers:

export CC=/usr/bin/gcc
export CXX=/usr/bin/g++
export LDFLAGS="-L/usr/lib/x86_64-linux-gnu"
export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
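To see why this matters, here is a small diagnostic sketch (plain Python, standard library only) that prints which gcc, g++, and ld a build would pick up from PATH. In the log above, the linker resolved to miniconda3/compiler_compat/ld, which cannot see the system libgomp.so.1 — exactly the undefined-reference errors shown:

```python
import shutil

# Show which toolchain binaries are first on PATH; a conda environment's
# compiler_compat/ld shadowing the system linker is a common culprit here.
paths = {tool: shutil.which(tool) for tool in ("gcc", "g++", "ld")}
for tool, path in paths.items():
    print(f"{tool} -> {path}")
```

If `ld` resolves to something under your conda installation rather than /usr/bin, the exports above (or `conda deactivate` before building) should fix the link step.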

  6. Reinstall Dependencies

Reinstall the package and its dependencies in the virtual environment:

pip install -r requirements.txt

  7. Build and Install Manually

If pip continues to fail, manually build and install the package:

git clone https://github.com/abetlen/llama-cpp-python.git
cd llama-cpp-python
pip install .
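If the linker still cannot find libgomp after all of the above, a commonly suggested fallback is to build without OpenMP entirely, so libgomp.so.1 is never needed at link time. This assumes the GGML_OPENMP option exposed by the vendored llama.cpp CMake build (verify against the llama-cpp-python version pinned in requirements.txt); llama.cpp then falls back to its own thread pool:

```shell
# Sketch: disable OpenMP in the vendored llama.cpp build so libgomp.so.1
# is not required (GGML_OPENMP flag assumed from llama.cpp's CMake options)
export CMAKE_ARGS="-DGGML_OPENMP=OFF"
pip install --no-cache-dir --force-reinstall llama-cpp-python
```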

synth-mania commented 1 day ago

I ran into a similar issue. Make sure gcc and g++ are installed.

slyfox1186 commented 1 day ago

You might get some ideas from one of my personal scripts for ways you can make this work. I know you need to have nvcc in the PATH.


pleabargain commented 1 day ago

No joy on Win 11. I've installed vs_BuildTools.

No joy; CMake is installed.

I tried the direct build of cpp as well.

git clone https://github.com/abetlen/llama-cpp-python.git
cd llama-cpp-python
pip install .

It failed too.

CMake configuration failed
 [end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama_cpp_python
Failed to build llama_cpp_python
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama_cpp_python)

Aside from dumping Windows (a good idea!), what other things can I try?

thank you!

TheBlewish commented 1 day ago

Yeah, I don't use Windows and I don't think Windows will work with the program, sorry!

synth-mania commented 20 hours ago

Try Windows Subsystem for Linux (WSL).

slyfox1186 commented 17 hours ago

@synth-mania That is what I use!