Closed · nikessel closed this 3 weeks ago
Are you using the OpenAI API directly, or a local LLM behind an OpenAI-compatible endpoint? I've noticed that with local LLMs, reliability varies a lot depending on the model.
At some point it might be worth using function calling directly for tools instead, for better reliability, given that even local LLMs are good at it now. See the sketch below.
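For illustration, a minimal sketch of what calling the chat completions endpoint with the `tools` field directly looks like. This assumes the `reqwest`, `tokio`, and `serde_json` crates, and the `get_date` tool is purely hypothetical:

```rust
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A single chat completion request with a `tools` array. The model
    // returns structured `tool_calls` JSON instead of free text that an
    // agent prompt would otherwise have to parse.
    let body = json!({
        "model": "gpt-4o-mini",
        "messages": [{ "role": "user", "content": "What is the date today?" }],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_date",
                "description": "Returns the current date",
                "parameters": { "type": "object", "properties": {} }
            }
        }]
    });

    let resp: serde_json::Value = reqwest::Client::new()
        .post("https://api.openai.com/v1/chat/completions")
        .bearer_auth(std::env::var("OPENAI_API_KEY")?)
        .json(&body)
        .send()
        .await?
        .json()
        .await?;

    // If the model decided to call the tool, the call shows up here.
    println!("{:#}", resp["choices"][0]["message"]["tool_calls"]);
    Ok(())
}
```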
I'm using the OpenAI API, mostly gpt-4o-mini for testing, but it has also happened several times with gpt-4o.
I'm seeing the same thing. Also testing against OpenAI gpt-4o
Please try the latest main branch. With a local LLM I was able to consistently get correct output using qwen2.5, but not with llama3.2. Do make sure the model you choose is good at function calling.
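If you want to try a local model, something like the following should work for pointing the client at an OpenAI-compatible local server. This is a sketch: the `OpenAIConfig` builder names follow the repo's examples, and the Ollama base URL, dummy key, and model tag are assumptions to adjust for your setup:

```rust
use langchain_rust::llm::openai::{OpenAI, OpenAIConfig};

fn main() {
    // Point the OpenAI-compatible client at a local server (here Ollama's
    // default port). Base URL, key, and model tag are placeholders.
    let _llm = OpenAI::default()
        .with_config(
            OpenAIConfig::default()
                .with_api_base("http://localhost:11434/v1")
                .with_api_key("ollama"),
        )
        .with_model("qwen2.5");
}
```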
Thanks to @chirino for the fix!
Describe the bug
I keep running into this error when using the OpenAI agents:
And sometimes it works:
To Reproduce
I've set up a minimal project based on this example: https://github.com/Abraxas-365/langchain-rust/blob/main/examples/open_ai_tools_agent.rs
Cargo.toml:
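Roughly, a minimal manifest for a project built on that example would look like this. Crate versions and the package name are assumptions; pin to whatever you actually tested against:

```toml
[package]
# Placeholder name for the repro project.
name = "openai-tools-agent-repro"
version = "0.1.0"
edition = "2021"

[dependencies]
# Versions below are assumptions; adjust to your lockfile.
langchain-rust = "4"
async-trait = "0.1"
serde_json = "1"
tokio = { version = "1", features = ["full"] }
```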
main.rs:
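A minimal main.rs along the lines of the linked open_ai_tools_agent.rs example would look roughly like this. The builder and trait names follow that example and may differ across versions; the `Date` tool and its canned answer are just illustrations:

```rust
use async_trait::async_trait;
use langchain_rust::{
    agent::{AgentExecutor, OpenAiToolAgentBuilder},
    chain::{options::ChainCallOptions, Chain},
    llm::openai::OpenAI,
    memory::SimpleMemory,
    prompt_args,
    tools::Tool,
};
use serde_json::Value;
use std::{error::Error, sync::Arc};

// A trivial tool the agent can call; the hard-coded answer is a placeholder.
struct Date {}

#[async_trait]
impl Tool for Date {
    fn name(&self) -> String {
        "Date".to_string()
    }
    fn description(&self) -> String {
        "Useful when you need to get the current date".to_string()
    }
    async fn run(&self, _input: Value) -> Result<String, Box<dyn Error>> {
        Ok("25th of November 2024".to_string())
    }
}

#[tokio::main]
async fn main() {
    let llm = OpenAI::default();
    let memory = SimpleMemory::new();

    // Build the OpenAI tools agent with the single Date tool.
    let agent = OpenAiToolAgentBuilder::new()
        .tools(&[Arc::new(Date {})])
        .options(ChainCallOptions::new().with_max_tokens(1000))
        .build(llm)
        .unwrap();

    let executor = AgentExecutor::from_agent(agent).with_memory(memory.into());

    let input_variables = prompt_args! {
        "input" => "What is the date today?",
    };

    match executor.invoke(input_variables).await {
        Ok(result) => println!("Result: {:?}", result),
        Err(e) => panic!("Error invoking agent: {:?}", e),
    }
}
```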