sobelio / llm-chain

`llm-chain` is a powerful Rust crate for building chains in large language models, allowing you to summarise text and complete complex tasks.
https://llm-chain.xyz
MIT License

Fix CUDA build for llm-chain-llama-sys #261

Closed mhlakhani closed 5 months ago

mhlakhani commented 6 months ago

This fixes the build when attempting to build llm-chain-llama-sys with CUDA enabled (by setting CARGO_FEATURE_CUDA).

Without this PR, builds fail with errors similar to https://github.com/ggerganov/llama.cpp/issues/1728

I spent some time coming up with a solution that just worked on my machine, before reading the comment at the top of the file, which references https://github.com/tazz4843/whisper-rs/blob/master/sys/build.rs. That file already had a cleaner cross-platform solution, so I just copied it.

After this PR, I can successfully build llm-chain-llama-sys with CUDA support (confirmed by setting the environment flag and running a test app on my machine).

andychenbruce commented 5 months ago

For me, on Ubuntu, using this as my build.rs works:

fn main() {
    // Libraries needed when linking a CUDA-enabled llama.cpp build,
    // plus the usual POSIX threading/loader libraries.
    let libs: &[&str] = &[
        "cublas", "culibos", "cudart", "cublasLt", "pthread", "dl", "rt",
    ];
    for lib in libs {
        // Emit `-l<lib>` as a raw linker argument for each library.
        println!("cargo:rustc-link-arg=-l{}", lib);
    }
}
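For reference, a variant of the same build script that only emits the link flags when the crate's CUDA feature is enabled might look like the sketch below. It relies on Cargo's documented behavior of exposing each enabled feature to build scripts as a CARGO_FEATURE_<NAME> environment variable (here CARGO_FEATURE_CUDA, the same variable mentioned in the PR description), and uses cargo:rustc-link-lib rather than raw linker arguments. This is a sketch of the idea, not the code from this PR or from #266:

```rust
// build.rs (sketch, not the PR's actual code): link the CUDA runtime
// libraries only when the `cuda` Cargo feature is enabled. Cargo sets
// CARGO_FEATURE_CUDA in the build-script environment for that feature.
fn main() {
    if std::env::var("CARGO_FEATURE_CUDA").is_ok() {
        for lib in ["cublas", "culibos", "cudart", "cublasLt", "pthread", "dl", "rt"] {
            // `rustc-link-lib` asks rustc to link the named native library,
            // equivalent to passing `-l<lib>` but portable across linkers.
            println!("cargo:rustc-link-lib={}", lib);
        }
    }
}
```

With this gating, a plain `cargo build` skips the CUDA link flags entirely, so non-CUDA builds are unaffected.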
mhlakhani commented 5 months ago

https://github.com/sobelio/llm-chain/pull/266 does this a little better.