utilityai / llama-cpp-rs


fatal error: 'ggml.h' file not found #533

samuelint opened 1 month ago

samuelint commented 1 month ago

Building fails both from the project root and from an example directory:

> cargo build # Fail
> cd examples/simple
> cargo build # Fail

Both commands lead to the following error:

cargo build
   Compiling llama-cpp-sys-2 v0.1.83 (/llama-cpp-rs/llama-cpp-sys-2)
error: failed to run custom build command for `llama-cpp-sys-2 v0.1.83 (/llama-cpp-rs/llama-cpp-sys-2)`

Caused by:
  process didn't exit successfully: `/llama-cpp-rs/target/debug/build/llama-cpp-sys-2-905f5f3cc6aa6cb5/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-env-changed=TARGET
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS_aarch64-apple-darwin
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS_aarch64_apple_darwin
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS
  cargo:rerun-if-changed=wrapper.h

  --- stderr
  ./llama.cpp/include/llama.h:4:10: fatal error: 'ggml.h' file not found
  thread 'main' panicked at llama-cpp-sys-2/build.rs:197:10:
  Failed to generate bindings: ClangDiagnostic("./llama.cpp/include/llama.h:4:10: fatal error: 'ggml.h' file not found\n")
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

However, building llama-cpp-2 and llama-cpp-sys-2 individually succeeds.

> cd llama-cpp-2
> cargo build # Pass
> cd llama-cpp-sys-2
> cargo build # Pass

What needs to be done to build the project on a Mac?

I've tried explicitly passing --features metal, but it doesn't fix the problem.

I've followed the steps described in the Hacking section of the readme: https://github.com/utilityai/llama-cpp-rs/tree/main?tab=readme-ov-file#hacking
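
Since the build log shows the build script reading BINDGEN_EXTRA_CLANG_ARGS, one workaround I'm experimenting with is pointing bindgen's clang at the ggml headers directly. The include path below is a guess based on the llama.cpp source layout, and I haven't confirmed it fixes the error:

> BINDGEN_EXTRA_CLANG_ARGS="-I$(pwd)/llama-cpp-sys-2/llama.cpp/ggml/include" cargo build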

brittlewis12 commented 1 month ago

@samuelint I invoke the simple binary from the root of the repo like this:

cargo run --release --bin simple --features metal -- --n-len=2048 --prompt "<|start_header_id|>user<|end_header_id|>\n\nshare 5 reasons rust is better than c++<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" local ~/models/llama-3.2-3b-instruct.Q6_K.gguf
samuelint commented 1 month ago

That command works. It seems the problem is the debug build; it only works in release.

How can I make it work in debug? I also get the error with VS Code's rust-analyzer, which builds in debug by default. This prevents the IDE from highlighting errors.
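
For the rust-analyzer side specifically, I'm trying to force the feature on through the workspace settings in .vscode/settings.json (rust-analyzer.cargo.features is a documented rust-analyzer setting; whether it helps with this particular build-script failure I haven't confirmed):

{
    "rust-analyzer.cargo.features": ["metal"]
}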

brittlewis12 commented 1 month ago

hmm interesting, I have no problems removing the release flag and performing a build rather than a run:

cargo b --bin simple --features metal
samuelint commented 1 month ago

@brittlewis12 which commit is your llama.cpp submodule pinned to?

brittlewis12 commented 1 month ago

it appears pinned to 8f1d81a0
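
for anyone comparing, I read it off the submodule like this (assuming it's checked out at llama-cpp-sys-2/llama.cpp, which matches the ./llama.cpp path in the error):

git -C llama-cpp-sys-2/llama.cpp rev-parse --short HEAD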

arrowban commented 1 month ago

@brittlewis12 I had a similar issue when running the simple example on macOS 15.0.1:

cargo run --release --bin simple -- --prompt "The way to kill a linux process is" hf-model TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf
   Compiling llama-cpp-sys-2 v0.1.83 (/Users/arrowban/tauri-app/llama-cpp-rs/llama-cpp-sys-2)
   Compiling icu_normalizer v1.5.0
   Compiling clap v4.5.19
   Compiling idna v1.0.1
   Compiling url v2.5.1
   Compiling ureq v2.9.7
error: failed to run custom build command for `llama-cpp-sys-2 v0.1.83 (/Users/arrowban/tauri-app/llama-cpp-rs/llama-cpp-sys-2)`

Caused by:
  process didn't exit successfully: `/Users/arrowban/tauri-app/llama-cpp-rs/target/release/build/llama-cpp-sys-2-0c4fc171384fe637/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-env-changed=TARGET
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS_aarch64-apple-darwin
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS_aarch64_apple_darwin
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS
  cargo:rerun-if-changed=wrapper.h

  --- stderr
  ./llama.cpp/include/llama.h:4:10: fatal error: 'ggml.h' file not found
  thread 'main' panicked at llama-cpp-sys-2/build.rs:197:10:
  Failed to generate bindings: ClangDiagnostic("./llama.cpp/include/llama.h:4:10: fatal error: 'ggml.h' file not found\n")
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...

It was fixed after deleting everything with rm -rf llama-cpp-rs and running the instructions again from scratch:

git clone --recursive https://github.com/utilityai/llama-cpp-rs
cd llama-cpp-rs

cargo run --release --bin simple -- --prompt "The way to kill a linux process is" hf-model TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf

I'm not sure why it didn't work the first time, but maybe it's because the first time around I cloned the repo without the --recursive flag, and only initialized the submodules after the first cargo run attempt.
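
If that guess is right, a lighter fix than re-cloning would presumably be initializing the submodules in place and clearing the stale build output, something like this (untested, since the fresh clone had already fixed it for me):

git submodule update --init --recursive
cargo clean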

vargad commented 3 weeks ago

@arrowban same here on Linux