ggerganov / llama.cpp

LLM inference in C/C++
MIT License

ERROR: Failed building wheel for llama-cpp-python using cmake #3172

Closed. icecoldt369 closed this issue 5 months ago.

icecoldt369 commented 1 year ago

I am trying to launch llama-2 from the oobabooga_macos repo but am encountering errors on macOS, as shown below:

ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

Commands I ran

Installing llama.cpp with CMake and Metal enabled

I ran the command:

CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
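For context: CMAKE_ARGS is read by the scikit-build-core backend and forwarded to CMake, so -DLLAMA_METAL=on ends up adding -DGGML_USE_METAL to the compiler flags and pulling ggml-metal.m into the build (both are visible in the log below). Here is a minimal sketch of that compile-time gate, with a hypothetical file name metal_gate.c; this is not llama.cpp's actual code, just the mechanism:

```c
/* metal_gate.c (hypothetical): illustrates how a CMake option becomes a
 * preprocessor gate. Build with and without the define to see both paths:
 *   cc metal_gate.c -o metal_gate
 *   cc -DGGML_USE_METAL metal_gate.c -o metal_gate */
#include <stdio.h>

int main(void) {
#ifdef GGML_USE_METAL
    puts("GGML_USE_METAL set: the Metal backend (ggml-metal.m) gets compiled");
#else
    puts("GGML_USE_METAL unset: the Metal backend is skipped");
#endif
    return 0;
}
```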

This is my log output:

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [103 lines of output]
scikit-build-core 0.5.0 using CMake 3.27.4 (wheel)
Configuring CMake...
2023-09-14 16:56:07,388 - scikit_build_core - WARNING - libdir/ldlibrary: /Library/Frameworks/Python.framework/Versions/3.11/lib/Python.framework/Versions/3.11/Python is not a real file!
loading initial cache file /var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/tmpspa24b39/build/CMakeInit.txt
-- The C compiler identification is AppleClang 11.0.3.11030032
-- The CXX compiler identification is AppleClang 11.0.3.11030032
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/local/bin/git (found version "2.42.0")
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
CMake Warning at vendor/llama.cpp/CMakeLists.txt:125 (message):
  Git repository not found; to enable automatic generation of build info, make
  sure Git is installed and the project is a Git repository.

-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Accelerate framework found
-- Metal framework found
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Warning (dev) at vendor/llama.cpp/CMakeLists.txt:676 (install):
  Target llama has RESOURCE files but no RESOURCE DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.

-- Configuring done (0.8s)
-- Generating done (0.0s)
-- Build files have been written to: /var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/tmpspa24b39/build
*** Building project with Ninja...
Change Dir: '/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/tmpspa24b39/build'

Run Build Command(s): /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-build-env-74q_89qy/normal/lib/python3.11/site-packages/ninja/data/bin/ninja -v
[1/11] /Library/Developer/CommandLineTools/usr/bin/cc -DGGML_USE_ACCELERATE -DGGML_USE_K_QUANTS -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks -march=native -mtune=native -O3 -DNDEBUG -std=gnu11 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Wno-unused-function -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-alloc.c
[2/11] /Library/Developer/CommandLineTools/usr/bin/c++ -DGGML_USE_ACCELERATE -DGGML_USE_K_QUANTS -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/common/. -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/. -march=native -mtune=native -O3 -DNDEBUG -std=gnu++11 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -c /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/common/console.cpp
[3/11] /Library/Developer/CommandLineTools/usr/bin/cc -DGGML_USE_ACCELERATE -DGGML_USE_K_QUANTS -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks -march=native -mtune=native -O3 -DNDEBUG -std=gnu11 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Wno-unused-function -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-metal.m
FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o
/Library/Developer/CommandLineTools/usr/bin/cc -DGGML_USE_ACCELERATE -DGGML_USE_K_QUANTS -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks -march=native -mtune=native -O3 -DNDEBUG -std=gnu11 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Wno-unused-function -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-metal.m.o -c /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-metal.m
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-metal.m:613:5: error: use of undeclared identifier 'MTLComputePassDescriptor'
    MTLComputePassDescriptor * edesc = MTLComputePassDescriptor.computePassDescriptor;
    ^
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-metal.m:613:32: error: use of undeclared identifier 'edesc'
    MTLComputePassDescriptor * edesc = MTLComputePassDescriptor.computePassDescriptor;
                               ^
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-metal.m:613:40: error: use of undeclared identifier 'MTLComputePassDescriptor'
    MTLComputePassDescriptor * edesc = MTLComputePassDescriptor.computePassDescriptor;
                                       ^
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-metal.m:618:5: error: use of undeclared identifier 'edesc'
    edesc.dispatchType = has_concur ? MTLDispatchTypeConcurrent : MTLDispatchTypeSerial;
    ^
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-metal.m:631:61: warning: instance method '-computeCommandEncoderWithDescriptor:' not found (return type defaults to 'id') [-Wobjc-method-access]
    ctx->command_encoders[i] = [ctx->command_buffers[i] computeCommandEncoderWithDescriptor: edesc];
                                                        ^~~~~~~~~~~
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-metal.m:631:98: error: use of undeclared identifier 'edesc'
    ctx->command_encoders[i] = [ctx->command_buffers[i] computeCommandEncoderWithDescriptor: edesc];
                                                                                             ^
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml-metal.m:873:61: error: use of undeclared identifier 'MTLGPUFamilyApple7'
    [ctx->device supportsFamily:MTLGPUFamilyApple7] &&
                                ^
1 warning and 6 errors generated.
[4/11] /Library/Developer/CommandLineTools/usr/bin/cc -DGGML_USE_ACCELERATE -DGGML_USE_K_QUANTS -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks -march=native -mtune=native -O3 -DNDEBUG -std=gnu11 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Wno-unused-function -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o -c /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/k_quants.c
[5/11] /Library/Developer/CommandLineTools/usr/bin/c++ -DGGML_USE_ACCELERATE -DGGML_USE_K_QUANTS -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/common/. -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/. -march=native -mtune=native -O3 -DNDEBUG -std=gnu++11 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/grammar-parser.cpp.o -c /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/common/grammar-parser.cpp
[6/11] /Library/Developer/CommandLineTools/usr/bin/c++ -DGGML_USE_ACCELERATE -DGGML_USE_K_QUANTS -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/common/. -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/. -march=native -mtune=native -O3 -DNDEBUG -std=gnu++11 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/common/common.cpp
[7/11] /Library/Developer/CommandLineTools/usr/bin/c++ -DGGML_USE_ACCELERATE -DGGML_USE_K_QUANTS -DGGML_USE_METAL -DLLAMA_BUILD -DLLAMA_SHARED -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -Dllama_EXPORTS -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks -march=native -mtune=native -O3 -DNDEBUG -std=gnu++11 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/llama.cpp
[8/11] /Library/Developer/CommandLineTools/usr/bin/cc -DGGML_USE_ACCELERATE -DGGML_USE_K_QUANTS -DGGML_USE_METAL -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE=600 -I/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/. -F/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks -march=native -mtune=native -O3 -DNDEBUG -std=gnu11 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Wno-unused-function -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml.c
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml.c:2391:5: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
    GGML_F16_VEC_REDUCE(sumf, sum);
    ^~~~~~~~~~
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml.c:2023:37: note: expanded from macro 'GGML_F16_VEC_REDUCE'

#define GGML_F16_VEC_REDUCE GGML_F32Cx8_REDUCE
                                    ^
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml.c:2013:33: note: expanded from macro 'GGML_F32Cx8_REDUCE'
#define GGML_F32Cx8_REDUCE GGML_F32x8_REDUCE
                                ^
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml.c:1959:11: note: expanded from macro 'GGML_F32x8_REDUCE'
    res = _mm_cvtss_f32(_mm_hadd_ps(t1, t1)); \
    ~     ^~~~~~~~~~
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml.c:3657:9: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
        GGML_F16_VEC_REDUCE(sumf[k], sum[k]);
        ^~~~~~~~
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml.c:2023:37: note: expanded from macro 'GGML_F16_VEC_REDUCE'
#define GGML_F16_VEC_REDUCE GGML_F32Cx8_REDUCE
                                    ^
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml.c:2013:33: note: expanded from macro 'GGML_F32Cx8_REDUCE'
#define GGML_F32Cx8_REDUCE GGML_F32x8_REDUCE
                                ^
/private/var/folders/4l/zwvr5hz51gvbhqkcpm0lhljc0000gn/T/pip-install-m27s3ma3/llama-cpp-python_f6f76d2d0d8746a8af06996220ab80a1/vendor/llama.cpp/ggml.c:1959:11: note: expanded from macro 'GGML_F32x8_REDUCE'
    res = _mm_cvtss_f32(_mm_hadd_ps(t1, t1)); \
    ~     ^~~~~~~~~~
2 warnings generated.
ninja: build stopped: subcommand failed.

*** CMake build failed [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
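All of the hard errors are in ggml-metal.m and share one cause: MTLComputePassDescriptor and MTLGPUFamilyApple7 were added to Metal in macOS 11.0, while the log shows the build running against -isysroot .../MacOSX10.15.sdk, whose headers do not declare them. Below is a minimal diagnostic sketch of that distinction, assuming the standard Availability.h macros (__MAC_11_0 is only defined by macOS 11.0 or newer SDKs); this is a probe, not the actual llama.cpp fix:

```c
/* sdk_check.c (hypothetical name): reports whether the SDK the compiler is
 * using is new enough to declare MTLComputePassDescriptor (macOS 11.0+). */
#include <Availability.h>
#include <stdio.h>

int main(void) {
#ifdef __MAC_11_0
    puts("SDK is macOS 11.0 or newer: MTLComputePassDescriptor is declared");
#else
    puts("SDK is older than macOS 11.0: the newer Metal symbols are undeclared,");
    puts("so ggml-metal.m fails exactly as in the log above");
#endif
    return 0;
}
```

On a Catalina-era toolchain (AppleClang 11 with the 10.15 SDK, as in this log) this should take the second branch; running xcrun --sdk macosx --show-sdk-version gives the same answer directly.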

#######################################################################################

Building llama.cpp standalone

I ran the commands:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake ..
cmake --build . --config Release

This is my log output:

remote: Enumerating objects: 8862, done.
remote: Counting objects: 100% (8862/8862), done.
remote: Compressing objects: 100% (2696/2696), done.
remote: Total 8862 (delta 6147), reused 8782 (delta 6106), pack-reused 0
Receiving objects: 100% (8862/8862), 8.29 MiB | 7.51 MiB/s, done.
Resolving deltas: 100% (6147/6147), done.
-- The C compiler identification is AppleClang 11.0.3.11030032
-- The CXX compiler identification is AppleClang 11.0.3.11030032
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/local/bin/git (found version "2.42.0")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Accelerate framework found
-- Metal framework found
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done (1.5s)
-- Generating done (0.9s)
-- Build files have been written to: /Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/build
[ 1%] Built target BUILD_INFO
[ 2%] Building C object CMakeFiles/ggml.dir/ggml.c.o
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml.c:2391:5: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
    GGML_F16_VEC_REDUCE(sumf, sum);
    ^~~~~~~~~~
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml.c:2023:37: note: expanded from macro 'GGML_F16_VEC_REDUCE'

#define GGML_F16_VEC_REDUCE GGML_F32Cx8_REDUCE
                                    ^
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml.c:2013:33: note: expanded from macro 'GGML_F32Cx8_REDUCE'
#define GGML_F32Cx8_REDUCE GGML_F32x8_REDUCE
                                ^
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml.c:1959:11: note: expanded from macro 'GGML_F32x8_REDUCE'
    res = _mm_cvtss_f32(_mm_hadd_ps(t1, t1)); \
    ~     ^~~~~~~~~~
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml.c:3657:9: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
        GGML_F16_VEC_REDUCE(sumf[k], sum[k]);
        ^~~~~~~~
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml.c:2023:37: note: expanded from macro 'GGML_F16_VEC_REDUCE'
#define GGML_F16_VEC_REDUCE GGML_F32Cx8_REDUCE
                                    ^
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml.c:2013:33: note: expanded from macro 'GGML_F32Cx8_REDUCE'
#define GGML_F32Cx8_REDUCE GGML_F32x8_REDUCE
                                ^
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml.c:1959:11: note: expanded from macro 'GGML_F32x8_REDUCE'
    res = _mm_cvtss_f32(_mm_hadd_ps(t1, t1)); \
    ~     ^~~~~~~~~~
2 warnings generated.
[ 4%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 5%] Building C object CMakeFiles/ggml.dir/ggml-metal.m.o
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml-metal.m:613:5: error: use of undeclared identifier 'MTLComputePassDescriptor'
    MTLComputePassDescriptor * edesc = MTLComputePassDescriptor.computePassDescriptor;
    ^
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml-metal.m:613:32: error: use of undeclared identifier 'edesc'
    MTLComputePassDescriptor * edesc = MTLComputePassDescriptor.computePassDescriptor;
                               ^
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml-metal.m:613:40: error: use of undeclared identifier 'MTLComputePassDescriptor'
    MTLComputePassDescriptor * edesc = MTLComputePassDescriptor.computePassDescriptor;
                                       ^
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml-metal.m:618:5: error: use of undeclared identifier 'edesc'
    edesc.dispatchType = has_concur ? MTLDispatchTypeConcurrent : MTLDispatchTypeSerial;
    ^
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml-metal.m:631:61: warning: instance method '-computeCommandEncoderWithDescriptor:' not found (return type defaults to 'id') [-Wobjc-method-access]
    ctx->command_encoders[i] = [ctx->command_buffers[i] computeCommandEncoderWithDescriptor: edesc];
                                                        ^~~~~~~~~~~
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml-metal.m:631:98: error: use of undeclared identifier 'edesc'
    ctx->command_encoders[i] = [ctx->command_buffers[i] computeCommandEncoderWithDescriptor: edesc];
                                                                                             ^
/Users/tevykuch/V2-Langchain/oobabooga_macos/llama.cpp/ggml-metal.m:873:61: error: use of undeclared identifier 'MTLGPUFamilyApple7'
    [ctx->device supportsFamily:MTLGPUFamilyApple7] &&
                                ^
1 warning and 6 errors generated.
make[2]: *** [CMakeFiles/ggml.dir/ggml-metal.m.o] Error 1
make[1]: *** [CMakeFiles/ggml.dir/all] Error 2
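The standalone build dies on the identical ggml-metal.m errors, which points at the toolchain rather than at pip or the wheel build: the Command Line Tools here ship AppleClang 11 with the MacOSX10.15.sdk. One way to confirm which macOS version the active SDK targets is a one-line probe (a sketch below; sdkprobe.c is a hypothetical file name, and the macro is the standard one from Availability.h):

```c
/* sdkprobe.c (hypothetical): print the highest macOS version the SDK knows.
 * Build and run: cc sdkprobe.c -o sdkprobe && ./sdkprobe
 * 101500 would correspond to the 10.15 SDK seen in the logs above. */
#include <Availability.h>
#include <stdio.h>

int main(void) {
#ifdef __MAC_OS_X_VERSION_MAX_ALLOWED
    printf("__MAC_OS_X_VERSION_MAX_ALLOWED = %d\n", __MAC_OS_X_VERSION_MAX_ALLOWED);
#else
    puts("__MAC_OS_X_VERSION_MAX_ALLOWED is not defined on this target");
#endif
    return 0;
}
```

If it reports 101500, updating Xcode or the Command Line Tools to a release that bundles a macOS 11+ SDK should let ggml-metal.m compile; alternatively, configuring with -DLLAMA_METAL=off (and installing llama-cpp-python without the Metal CMAKE_ARGS) avoids the Metal sources entirely, at the cost of GPU acceleration.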

joseberlines commented 9 months ago

I have the same error

rigvedrs commented 7 months ago

Maybe this answer could help

github-actions[bot] commented 5 months ago

This issue was closed because it has been inactive for 14 days since being marked as stale.