sobelio / llm-chain

`llm-chain` is a powerful Rust crate for building chains in large language models, allowing you to summarise text and complete complex tasks.
https://llm-chain.xyz
MIT License
1.27k stars · 127 forks

Add support for Ollama #265

Closed · prabirshrestha closed this issue 5 months ago

prabirshrestha commented 5 months ago

Add support for Ollama, which allows running LLMs locally. It could reuse https://crates.io/crates/ollama-rs
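For reference, a minimal sketch of talking to a local Ollama server directly through the ollama-rs crate might look like the code below (the module paths, generate call, and response field are taken from ollama-rs and may differ between versions; this is illustrative, not the eventual llm-chain integration):

use ollama_rs::generation::completion::request::GenerationRequest;
use ollama_rs::Ollama;

#[tokio::main]
async fn main() {
    // Talks to the default Ollama server at http://localhost:11434.
    let ollama = Ollama::default();
    let request = GenerationRequest::new(
        "llama2".to_string(),
        "Why is the sky blue?".to_string(),
    );
    // Request a completion and print either the model output or the error.
    match ollama.generate(request).await {
        Ok(res) => println!("AI:\n{}", res.response),
        Err(e) => println!("Error: {e}"),
    }
}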

kfehlhauer commented 5 months ago

Using the llm-chain-local crate and pulling the models from Hugging Face seems to be supported. I would give that a go. https://github.com/sobelio/llm-chain/tree/main/crates/llm-chain-local

prabirshrestha commented 5 months ago

For my use case I need it to run as a server, since I'm running my app on a NAS and the NAS is not powerful enough to run LLMs itself.

williamhogman commented 5 months ago

This seems like a good idea.

kfehlhauer commented 5 months ago

Ollama has released compatibility with the OpenAI API. https://ollama.com/blog/openai-compatibility

To use it, set the environment variable: export OPENAI_API_BASE_URL=http://localhost:11434/v1

Then this code will work with the main branch of llm-chain and llm-chain-openai.

use llm_chain::options;
use llm_chain::options::ModelRef;
use llm_chain::{executor, parameters, prompt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point the OpenAI-compatible executor at the local Ollama model.
    let opts = options!(
        Model: ModelRef::from_model_name("codellama")
    );
    let exec = executor!(chatgpt, opts.clone())?;
    let query = r#"How do I use the "match" keyword in Rust?"#;
    println!("Query: {query}\n");
    // Run the prompt through the executor and await the completion.
    let res = prompt!("", query,).run(&parameters!(), &exec).await;
    match res {
        Ok(res) => println!("AI:\n{res}"),
        Err(e) => println!("Error: {e}"),
    }
    Ok(())
}
prabirshrestha commented 5 months ago

How do I set the API key? The result is the same whether I export OPENAI_API_KEY=ollama or set it in code. Using curl works.

/Users/prabirshrestha/code/tmp/rust-llm$ env | grep OPENAI
OPENAI_API_BASE_URL=http://localhost:11434/v1
OPENAI_API_KEY=ollama

/Users/prabirshrestha/code/tmp/rust-llm$ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.07s
     Running `target/debug/rust-llm`
Query: How do I use the "match" keyword in Rust?

Error: Error executing: Unable to run model: Some("invalid_request_error"): Incorrect API key provided: ollama. You can find your API key at https://platform.openai.com/account/api-keys.

/Users/prabirshrestha/code/tmp/rust-llm$ curl http://localhost:11434/v1/chat/completions -H "Authorization: Bearer ollama"     -H "Content-Type: application/json"     -d '{
        "model": "llama2",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'
{"id":"chatcmpl-839","object":"chat.completion","created":1707706686,"model":"llama2","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":"Hello there! It's nice to meet you. Is there something I can help you with or would you like me to assist you in any way? Please let me know how I can be of assistance."},"finish_reason":"stop"}],"usage":{"prompt_tokens":0,"completion_tokens":43,"total_tokens":43}}

It would be great if options! supported BaseUrl similar to ApiKey, and if OptionsBuilder.add_option were chainable. I will create a separate issue for this.
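For illustration only, the requested API might look something like the sketch below; the BaseUrl option and the chainable add_option are hypothetical and do not exist in llm-chain at the time of writing:

use llm_chain::options::{ModelRef, Opt, OptionsBuilder};

// Hypothetical sketch of the feature request: a BaseUrl option alongside
// ApiKey, plus a chainable OptionsBuilder::add_option. Not real llm-chain
// API yet; shown only to illustrate what is being asked for.
let opts = OptionsBuilder::new()
    .add_option(Opt::Model(ModelRef::from_model_name("codellama")))
    .add_option(Opt::ApiKey("ollama".to_string()))
    .add_option(Opt::BaseUrl("http://localhost:11434/v1".to_string()))
    .build();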

kfehlhauer commented 5 months ago

How do I set the API key? The result is the same whether I export OPENAI_API_KEY=ollama or set it in code. Using curl works.

You should not need to set OPENAI_API_KEY for Ollama models coming from https://ollama.com/. That environment variable is only needed when using an endpoint that requires an API key, like OpenAI. In case the llm-chain lib requires it, you could set it like this: export OPENAI_API_KEY=fakekey

As for where environment variables are set: on macOS I would set them in ~/.zshrc; on Linux, in ~/.bashrc or wherever your shell requires.

You could do this programmatically too.

use std::env;

// Set this before the executor is created so the key is picked up.
env::set_var("OPENAI_API_KEY", "fakekey");
kfehlhauer commented 5 months ago

@williamhogman I believe this issue can be closed out given https://ollama.com/blog/openai-compatibility

jaredmcqueen commented 5 months ago

I'm having trouble with this too, @kfehlhauer.

ollama output:

❯ ollama ls
NAME                    ID              SIZE    MODIFIED
codellama:latest        8fdf8f752f6e    3.8 GB  2 seconds ago
mistral:latest          073b6c495da3    4.1 GB  2 days ago

Cargo.toml:

[package]
name = "my-llm-project"
version = "0.1.0"
edition = "2021"

[dependencies]
llm-chain = "0.13.0"
llm-chain-openai = "0.13.0"
tokio = { version = "1.36.0", features = ["full"] }

main.rs:

use llm_chain::options;
use llm_chain::options::ModelRef;
use llm_chain::{executor, parameters, prompt};
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    env::set_var("OPENAI_API_BASE_URL", "http://localhost:11434/v1");
    env::set_var("OPENAI_API_KEY", "ollama");

    let opts = options!(
         Model: ModelRef::from_model_name("codellama")
    );
    let exec = executor!(chatgpt, opts.clone())?;
    let query = r#"How do I use the "match" keyword in Rust?"#;
    println!("Query: {query}\n");
    let res = prompt!("", query,).run(&parameters!(), &exec).await;
    match res {
        Ok(res) => println!("AI:\n{res}"),
        Err(e) => println!("Error: {e}"),
    }
    Ok(())
}

error:

❯ cargo run
   Compiling my-llm-project v0.1.0 (/Users/jared/Projects/rust/my-llm-project)
    Finished dev [unoptimized + debuginfo] target(s) in 0.52s
     Running `target/debug/my-llm-project`
Query: How do I use the "match" keyword in Rust?

Error: Error executing: Unable to run model: Some("invalid_request_error"): Incorrect API key provided: ollama. You can find your API key at https://platform.openai.com/account/api-keys.
kfehlhauer commented 5 months ago

@jaredmcqueen Pulling from crates.io won't work; the released 0.13.0 crates don't have this support yet, only the main branch does. You need to clone the llm-chain repo and build against it locally.

Then change your Cargo.toml to this:

[dependencies]
llm-chain = {path = "/<PATH TO REPO>/llm-chain/crates/llm-chain"}
llm-chain-openai = {path = "/<PATH TO REPO>/llm-chain/crates/llm-chain-openai"}
tokio = { version = "1.35.1", features = ["full"] }
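Alternatively, as a sketch (assuming the unreleased changes are on the main branch of the GitHub repo), a git dependency avoids keeping a local clone in sync:

[dependencies]
# Pull the crates straight from the repository's main branch instead of a local path.
llm-chain = { git = "https://github.com/sobelio/llm-chain" }
llm-chain-openai = { git = "https://github.com/sobelio/llm-chain" }
tokio = { version = "1.35.1", features = ["full"] }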
prabirshrestha commented 5 months ago

Verified that it works with the latest main branch. It would be great if there were a new release. Thanks!