Abraxas-365 / langchain-rust

🦜️🔗LangChain for Rust, the easiest way to write LLM-based programs in Rust

Sporadic failure of agent queries when multiple tool calls are involved #225

Closed · nikessel closed this 1 month ago

nikessel commented 2 months ago

Describe the bug

I keep running into this error when using the OpenAI agents:

> cargo run
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.09s
     Running `target/debug/t`
Favorite food tool is called
Date tool is called
thread 'main' panicked at src/main.rs:77:19:
Error invoking LLMChain: AgentError("Error in agent planning: Chain error: LLM error: OpenAI error: invalid_request_error: Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'. (param: messages.[5].role)")
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

And sometimes it works:

[I] niklas@nslaptop /t/t (prod)
> cargo run
   Compiling t v0.1.0 (/tmp/t)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 2.09s
     Running `target/debug/t`
Date tool is called
Favorite food tool is called
Result: "- The name of the current directory is `/tmp/t`. - Today's date is the 25th of November, 2025. - The favorite food is Pizza."

To Reproduce

I've set up a minimal project based on this example: https://github.com/Abraxas-365/langchain-rust/blob/main/examples/open_ai_tools_agent.rs

Cargo.toml:

[package]
name = "t"
version = "0.1.0"
edition = "2021"

[dependencies]
async-trait = "0.1.82"
dotenvy = "0.15.7"
langchain-rust = "4.4.2"
serde_json = "1.0.128"
tokio = { version = "1.40.0", features = ["full"] }

main.rs

use std::{error::Error, sync::Arc};

use async_trait::async_trait;
use langchain_rust::{
    agent::{AgentExecutor, OpenAiToolAgentBuilder},
    chain::{options::ChainCallOptions, Chain},
    llm::openai::OpenAI,
    memory::SimpleMemory,
    prompt_args,
    tools::{CommandExecutor, DuckDuckGoSearchResults, SerpApi, Tool},
};
use serde_json::Value;

struct Date {}
struct FavoriteFood {}

#[async_trait]
impl Tool for Date {
    fn name(&self) -> String {
        "Date".to_string()
    }
    fn description(&self) -> String {
        "Useful when you need to get the date,input is  a query".to_string()
    }
    async fn run(&self, _input: Value) -> Result<String, Box<dyn Error>> {
        println!("Date tool is called");
        Ok("25  of november of 2025".to_string())
    }
}

#[async_trait]
impl Tool for FavoriteFood {
    fn name(&self) -> String {
        "favorite_food".to_string()
    }
    fn description(&self) -> String {
        "Returns the favorite food".to_string()
    }
    async fn run(&self, _input: Value) -> Result<String, Box<dyn Error>> {
        println!("Favorite food tool is called");
        Ok("Pizza".to_string())
    }
}

#[tokio::main]
async fn main() {
    dotenvy::dotenv().ok();
    let llm = OpenAI::default();
    let memory = SimpleMemory::new();
    let serpapi_tool = SerpApi::default();
    let duckduckgo_tool = DuckDuckGoSearchResults::default();
    let date_tool = Date {};
    let favorite_food = FavoriteFood {};
    let command_executor = CommandExecutor::default();
    let agent = OpenAiToolAgentBuilder::new()
        .tools(&[
            Arc::new(serpapi_tool),
            Arc::new(date_tool),
            Arc::new(command_executor),
            Arc::new(duckduckgo_tool),
            Arc::new(favorite_food),
        ])
        .options(ChainCallOptions::new().with_max_tokens(1000))
        .build(llm)
        .unwrap();

    let executor = AgentExecutor::from_agent(agent).with_memory(memory.into());

    let input_variables = prompt_args! {
        "input" => "What the name of the current dir, And what date is today, and what is the favorite food?",
    };

    match executor.invoke(input_variables).await {
        Ok(result) => {
            println!("Result: {:?}", result.replace("\n", " "));
        }
        Err(e) => panic!("Error invoking LLMChain: {:?}", e),
    }
}
prabirshrestha commented 2 months ago

Are you using the OpenAI API or a local LLM behind an OpenAI-compatible endpoint? I have noticed that with a local LLM, reliability varies a lot depending on the model.

At some point it might be worth using function calling directly instead of the tools abstraction for better reliability, given that even local LLMs are good at this now.
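
For reference, here is a rough sketch (not this crate's API) of hitting the Chat Completions tools endpoint directly with reqwest and serde_json, following the public API docs; the model name and tool schema are just placeholders:

use serde_json::{json, Value};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Payload shape follows the public Chat Completions docs; requires
    // reqwest with the "json" feature and OPENAI_API_KEY in the environment.
    let body = json!({
        "model": "gpt-4o-mini",
        "messages": [{ "role": "user", "content": "What date is today?" }],
        "tools": [{
            "type": "function",
            "function": {
                "name": "Date",
                "description": "Returns the current date",
                "parameters": { "type": "object", "properties": {} }
            }
        }]
    });
    let resp: Value = reqwest::Client::new()
        .post("https://api.openai.com/v1/chat/completions")
        .bearer_auth(std::env::var("OPENAI_API_KEY")?)
        .json(&body)
        .send()
        .await?
        .json()
        .await?;
    // Any `tool_calls` in the reply must be executed and fed back as
    // role "tool" messages, in the ordering shown earlier in the thread.
    println!("{}", resp["choices"][0]["message"]);
    Ok(())
}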

nikessel commented 2 months ago

I'm using the OpenAI API, mostly gpt-4o-mini for testing, but it has also happened several times using gpt-4o.

chirino commented 1 month ago

I'm seeing the same thing, also testing against OpenAI gpt-4o.

prabirshrestha commented 1 month ago

Please try the latest main branch. I was able to consistently get correct output from a local LLM with qwen2.5, but not with llama3.2. Do make sure the model you choose is good at function calling.
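
If it helps anyone testing locally: something like this is how I point the crate at Ollama's OpenAI-compatible endpoint (builder names may differ slightly between versions, so treat it as a sketch):

use langchain_rust::llm::openai::{OpenAI, OpenAIConfig};

fn main() {
    // Sketch: point the client at a local OpenAI-compatible endpoint
    // (Ollama's default port) and pick a model that handles tool calls well.
    let _llm = OpenAI::default()
        .with_config(
            OpenAIConfig::default()
                .with_api_base("http://localhost:11434/v1")
                .with_api_key("ollama"),
        )
        .with_model("qwen2.5");
}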

Thanks to @chirino for the fix!