ax-llm / ax

Build LLM powered Agents and "Agentic workflows" based on the Stanford DSP paper.
http://axllm.dev
Apache License 2.0

Function calling (using food-search.ts) #27

Closed: beddows closed this issue 2 days ago

beddows commented 2 weeks ago

I'm submitting a ... [x] bug report

Summary

I'm getting errors with:

Anthropic

```
@ax-llm/ax/build/module/src/util/apicall.js:39
    throw new Error(`API Error: ${apiUrl.href}, ${e}`);
          ^
Error: API Error: https://api.anthropic.com/v1/messages, Error: API Error: https://api.anthropic.com/v1/messages, 400, Bad Request
{
  "type": "error",
  "error": {
    "type": "invalid_request_error",
    "message": "Your API request included an `assistant` message in the final position, which would pre-fill the `assistant` response. When using tools, pre-filling the `assistant` response is not supported."
  }
}
```

Also:
- Anthropic just released "claude-3-5-sonnet-20240620" 🎉
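The Anthropic error says the request ends with an `assistant` message while tools are present, which Anthropic rejects. One possible workaround is to trim trailing assistant messages before sending; this is a minimal sketch, and the `trimTrailingAssistant` helper and `ChatMessage` type are illustrative, not part of the ax API:

```typescript
// Hypothetical sketch: Anthropic's Messages API rejects tool-enabled
// requests whose final message has role "assistant" (that would pre-fill
// the response). Dropping any trailing assistant messages avoids the 400.

type ChatMessage = { role: 'user' | 'assistant' | 'system'; content: string };

function trimTrailingAssistant(messages: ChatMessage[]): ChatMessage[] {
  const out = [...messages];
  // Pop assistant messages off the end until the last message is user/system.
  while (out.length > 0 && out[out.length - 1].role === 'assistant') {
    out.pop();
  }
  return out;
}
```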
Groq

```
@ax-llm/ax/build/module/src/ai/openai/api.js:166
    ({ delta: { content, role, tool_calls }, finish_reason }) => {
                ^
TypeError: Cannot read properties of undefined (reading 'content')
```

Also:
- ai/groq/index.ts is missing "export * from './types.js';"
- is 'AxAxGroqArgs' a typo?
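The Groq crash comes from destructuring `delta` out of a stream chunk that doesn't have one. A defensive sketch with optional chaining and fallbacks; the `StreamChoice` type and `extractDelta` helper here are illustrative, not the library's actual code:

```typescript
// Sketch: some streamed choices can arrive without a `delta` (e.g. the
// final chunk carrying only `finish_reason`). Optional chaining with
// defaults avoids the TypeError seen above.

type StreamChoice = {
  delta?: { content?: string; role?: string };
  finish_reason?: string | null;
};

function extractDelta(choice: StreamChoice): { content: string; done: boolean } {
  return {
    content: choice.delta?.content ?? '',   // '' when delta or content is absent
    done: choice.finish_reason != null,     // true once a finish reason arrives
  };
}
```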
Cohere

```
@ax-llm/ax/build/module/src/util/apicall.js:39
    throw new Error(`API Error: ${apiUrl.href}, ${e}`);
          ^
Error: API Error: https://api.cohere.ai/v1/generate, Error: API Error: https://api.cohere.ai/v1/generate, 400, Bad Request
{
  "message": "invalid request: prompt must be at least 1 token long."
}
```

Google

```
@ax-llm/ax/build/module/src/util/apicall.js:39
    throw new Error(`API Error: ${apiUrl.href}, ${e}`);
          ^
Error: API Error: https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:streamGenerateContent?alt=sse&key=, Error: API Error: https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:streamGenerateContent?alt=sse&key=, 400, Bad Request
{
  "error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[0].parameters.properties[1].value': Cannot find field.\nInvalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[1].parameters.properties[1].value': Cannot find field.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "tools[0].function_declarations[0].parameters.properties[1].value",
            "description": "Invalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[0].parameters.properties[1].value': Cannot find field."
          },
          {
            "field": "tools[0].function_declarations[1].parameters.properties[1].value",
            "description": "Invalid JSON payload received. Unknown name \"default\" at 'tools[0].function_declarations[1].parameters.properties[1].value': Cannot find field."
          }
        ]
      }
    ]
  }
}
```
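The `Unknown name "default"` violations suggest the function parameter schemas carry a JSON Schema `default` keyword that Gemini's `function_declarations` format rejects. A minimal sketch of a workaround that recursively strips the keyword before the request; the `stripDefaults` helper is hypothetical, not part of ax:

```typescript
// Sketch: recursively remove the "default" key from a JSON-Schema-like
// object so the payload matches what Gemini's function declarations accept.

function stripDefaults(schema: unknown): unknown {
  if (Array.isArray(schema)) return schema.map(stripDefaults);
  if (schema && typeof schema === 'object') {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(schema)) {
      if (k === 'default') continue; // Gemini rejects this keyword
      out[k] = stripDefaults(v);
    }
    return out;
  }
  return schema;
}
```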

Other observations:

dosco commented 2 weeks ago

will take a look

dosco commented 2 weeks ago

Can you confirm you're testing with the latest version? We fixed a bunch of stuff today and also added the new 3.5 Sonnet model.

beddows commented 2 weeks ago

@dosco Anthropic and Groq still throwing the same errors. I'm having different issues with Cohere and Google, but I need to double check that I'm not missing something.

I like the new AxAI syntax, easier to follow!

dosco commented 2 weeks ago

Great, will take a look and fix this today. Thanks! As the API was growing, the proper prefix helps with autocomplete etc.

dosco commented 2 weeks ago

The latest release (https://github.com/ax-llm/ax/releases/tag/9.0.9) has fixes for Anthropic, Cohere, and Gemini. I'm looking into Groq, but I suspect it's more of a model issue there; I might bump the default model choice up to a bigger model.

beddows commented 2 weeks ago

@dosco Cool, I'll check it out! Regarding Groq, one thing to keep in mind: they limit llama3-70b to 6k tokens/min and llama3-8b to 30k tokens/min. I found myself hitting the limit with the larger model quite often, within a single run.

dosco commented 2 weeks ago

Probably need to add a rate limiter; we support those in the library. Maybe Groq needs one by default.

dosco commented 2 weeks ago

In the latest release I added a default token-bucket-based rate limiter to Groq to slow it down when needed.
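A token-bucket limiter along those lines could look like this minimal sketch; the class, names, and numbers are illustrative, not the library's actual implementation:

```typescript
// Sketch: a token bucket sized to a provider's tokens-per-minute budget.
// Capacity refills continuously; a request is allowed only if enough
// tokens remain, which smooths bursts under a per-minute limit.

class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,    // e.g. 6000 for Groq's llama3-70b budget
    private refillPerMs: number, // capacity / 60_000 for a per-minute budget
  ) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  tryConsume(amount: number, now = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.lastRefill) * this.refillPerMs,
    );
    this.lastRefill = now;
    if (this.tokens < amount) return false;
    this.tokens -= amount;
    return true;
  }
}
```

A caller would check `tryConsume(estimatedTokens)` before each request and wait or retry when it returns false.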