mdrokz / rust-llama.cpp

LLama.cpp rust bindings
https://crates.io/crates/llama_cpp_rs/
MIT License

Feature flag metal: Fails to load model when n_gpu_layers > 0 #18

Open phudtran opened 8 months ago

phudtran commented 8 months ago

Can't utilize GPU on Mac with

llama_cpp_rs = { git = "https://github.com/mdrokz/rust-llama.cpp", version = "0.3.0", features = [
    "metal",
] }

Code

use llama_cpp_rs::{
    options::{ModelOptions, PredictOptions},
    LLama,
};
fn main() {
    let model_options = ModelOptions {
        n_gpu_layers: 1,
        ..Default::default()
    };

    let llama = LLama::new("zephyr-7b-alpha.Q2_K.gguf".into(), &model_options);
    println!("llama: {:?}", llama);
    let predict_options = PredictOptions {
        tokens: 0,
        threads: 14,
        top_k: 90,
        top_p: 0.86,
        token_callback: Some(Box::new(|token| {
            println!("token1: {}", token);

            true
        })),
        ..Default::default()
    };

    llama
        .unwrap()
        .predict(
            "what are the national animals of india".into(),
            predict_options,
        )
        .unwrap();
}

Error

llama_new_context_with_model: kv self size  =   64.00 MB
llama_new_context_with_model: ggml_metal_init() failed
llama: Err("Failed to load model")
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Failed to load model"', src/main.rs:40:10
mdrokz commented 8 months ago

Hmm, weird. I don't have a Mac available at the moment to test this, but I will look into it. Thanks.

zackshen commented 8 months ago

I have the same problem on my Apple M1.

zackshen commented 7 months ago

@phudtran I have found the root cause: you need to put the ggml-metal.metal file next to your binary (a sketch of automating the copy follows below). I also noticed that build.rs suppresses the Metal debug log when building the metal feature; dropping that flag, as shown in the build.rs snippets below, prints more logs and makes the underlying error visible.
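
For illustration, one way to automate the copy is a small helper the application can call before constructing LLama. This is only a sketch; the llama.cpp checkout path is an assumption.

use std::{env, fs};

// Sketch only: copy the shader from a vendored llama.cpp checkout (the source path is
// an assumption) into the directory containing the running binary, which is where
// ggml_metal_init looks for ggml-metal.metal.
fn ensure_metal_shader() -> std::io::Result<()> {
    let exe_dir = env::current_exe()?
        .parent()
        .expect("executable should have a parent directory")
        .to_path_buf();
    let dest = exe_dir.join("ggml-metal.metal");
    if !dest.exists() {
        fs::copy("llama.cpp/ggml-metal.metal", &dest)?;
    }
    Ok(())
}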

build.rs

fn compile_metal(cx: &mut Build, cxx: &mut Build) {
    cx.flag("-DGGML_USE_METAL").flag("-DGGML_METAL_NDEBUG");
    cxx.flag("-DGGML_USE_METAL");

    println!("cargo:rustc-link-lib=framework=Metal");
    println!("cargo:rustc-link-lib=framework=Foundation");
    println!("cargo:rustc-link-lib=framework=MetalPerformanceShaders");
    println!("cargo:rustc-link-lib=framework=MetalKit");

    cx.include("./llama.cpp/ggml-metal.h")
        .file("./llama.cpp/ggml-metal.m");
}

Removing GGML_METAL_NDEBUG re-enables the debug log:

fn compile_metal(cx: &mut Build, cxx: &mut Build) {
    cx.flag("-DGGML_USE_METAL"); // <==============  enable print debug log.
    cxx.flag("-DGGML_USE_METAL");

    println!("cargo:rustc-link-lib=framework=Metal");
    println!("cargo:rustc-link-lib=framework=Foundation");
    println!("cargo:rustc-link-lib=framework=MetalPerformanceShaders");
    println!("cargo:rustc-link-lib=framework=MetalKit");

    cx.include("./llama.cpp/ggml-metal.h")
        .file("./llama.cpp/ggml-metal.m");
}

@mdrokz Should we add a flag to enable/disable the debug log?

hugonijmek commented 7 months ago

@zackshen I've tried adding the ggml-metal.metal file next to the binary, but now I get the following message:

-[MTLComputePipelineDescriptorInternal setComputeFunction:withType:]:722: failed assertion 'computeFunction must not be nil.'

zackshen commented 7 months ago

@hugonijmek I have never seen this error before. I just modified the example code in this repo to test GPU utilization. Can you show your code?

mdrokz commented 7 months ago

@zackshen I will add an option for enabling/disabling the debug log.
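
As a minimal sketch of one way to do that, the build script could gate the flag behind a cargo feature; the feature name metal-ndebug is hypothetical and not part of this crate.

fn compile_metal(cx: &mut cc::Build, cxx: &mut cc::Build) {
    cx.flag("-DGGML_USE_METAL");
    cxx.flag("-DGGML_USE_METAL");

    // Cargo exposes enabled features to build scripts as CARGO_FEATURE_<NAME> variables,
    // so a hypothetical "metal-ndebug" feature shows up as CARGO_FEATURE_METAL_NDEBUG.
    if std::env::var("CARGO_FEATURE_METAL_NDEBUG").is_ok() {
        cx.flag("-DGGML_METAL_NDEBUG"); // silence ggml-metal's debug output
    }

    println!("cargo:rustc-link-lib=framework=Metal");
    println!("cargo:rustc-link-lib=framework=Foundation");
    println!("cargo:rustc-link-lib=framework=MetalPerformanceShaders");
    println!("cargo:rustc-link-lib=framework=MetalKit");

    cx.include("./llama.cpp/ggml-metal.h")
        .file("./llama.cpp/ggml-metal.m");
}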

genbit commented 6 months ago

Encountered the same error. Placing ggml-metal.metal into the project directory leads to the same error @hugonijmek has seen.

However, setting the environment variable GGML_METAL_PATH_RESOURCES to point at the llama.cpp sources solves the original issue, e.g. GGML_METAL_PATH_RESOURCES=/rust-llama.cpp/llama.cpp/ (see https://github.com/ggerganov/whisper.cpp/blob/master/ggml-metal.m#L261).
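
For example, the variable can also be set from the program itself before the model is loaded, since ggml reads it with getenv during Metal initialization; the path below is an assumption and should point at whatever directory contains ggml-metal.metal.

use std::env;

use llama_cpp_rs::{options::ModelOptions, LLama};

fn main() {
    // Assumed path: adjust to the llama.cpp checkout that ships ggml-metal.metal.
    env::set_var("GGML_METAL_PATH_RESOURCES", "./llama.cpp");

    let model_options = ModelOptions {
        n_gpu_layers: 1,
        ..Default::default()
    };
    let llama = LLama::new("zephyr-7b-alpha.Q2_K.gguf".into(), &model_options);
    println!("llama: {:?}", llama); // should now report Ok(..) instead of Err("Failed to load model")
}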

tbogdala commented 6 months ago

If you want to include it in the build, so you don't have to worry about keeping the shader file next to the binary or using the environment variable, you can use the solution from the rustformers/llm repository: https://github.com/rustformers/llm/commit/9d39ff8cc0a89bb22cc17bdc1dd2470f3421d788

To get it working, update the needle to the current string.

The file this puts in the output directory has a prefix before 'ggml-metal.o', so when checking the ggml_type in compile_llama, check for "metal" and, if it matches, search the output directory for the file with a call to ends_with("-ggml-metal.o") and add it with cxx.object(metal_path).
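
A rough sketch of that change (the function name is illustrative, not the crate's actual code, and it assumes cc writes the object directly into OUT_DIR):

use std::{env, fs, path::PathBuf};

// Illustrative only: after cc has compiled llama.cpp/ggml-metal.m, find the prefixed
// object file in OUT_DIR and hand it to the C++ build so the Metal code is linked in.
fn link_prebuilt_metal_object(cxx: &mut cc::Build) {
    let out_dir = PathBuf::from(env::var("OUT_DIR").expect("cargo sets OUT_DIR for build scripts"));
    let metal_obj = fs::read_dir(&out_dir)
        .expect("OUT_DIR should be readable")
        .filter_map(|entry| entry.ok().map(|e| e.path()))
        .find(|path| {
            path.file_name()
                .and_then(|name| name.to_str())
                .map_or(false, |name| name.ends_with("-ggml-metal.o"))
        })
        .expect("expected a *-ggml-metal.o object in OUT_DIR");
    cxx.object(metal_obj); // link the compiled Metal kernels into the final library
}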