Closed aryanbhasin closed 2 weeks ago
You can disable parallel tool calls, see https://sdk.vercel.ai/providers/ai-sdk-providers/openai#chat-models
Ah amazing, thank you. I looked for that flag in `generateText`, but it makes sense to configure it in the model provider itself.
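For reference, a minimal sketch of that provider-level configuration, assuming the `@ai-sdk/openai` provider's `parallelToolCalls` model setting described in the linked docs (exact option names may vary by SDK version):

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText, tool } from 'ai';
import { z } from 'zod';

// parallelToolCalls is passed as a setting on the model itself,
// not as an option to generateText (which does not expose it).
const model = openai('gpt-4o', { parallelToolCalls: false });

const result = await generateText({
  model,
  tools: {
    // Hypothetical tool for illustration only.
    weather: tool({
      description: 'Get the weather for a city',
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempF: 72 }), // stubbed response
    }),
  },
  prompt: 'What is the weather in Berlin?',
});
```

With the flag set to `false`, the model should emit at most one tool call per step, which sidesteps the `multi_tool_use.parallel` wrapper entirely.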
Description
When calling `generateText` with a newer OpenAI model like `gpt-4o` and one or more tools, the model somewhat sporadically throws the following error:

Longer stack --
This is possibly an issue with the underlying model itself; see the entire discussion here on the OpenAI forum. Some solutions explored on the forum were handling the `multi_tool_use.parallel` wrapper (which namespaces the requested tools under `functions.`), and either abandoning the request or trying it again.
On the AI SDK's side, a possible remedy would be supporting the `parallel_tool_calls: false` flag, which is currently not an option under `generateText`. Read more on that flag here.

Code example
Additional context
This error happens unpredictably: sometimes the tool gets called successfully and sometimes the model throws this error. The error gets thrown even if we pass a single tool, which suggests there are probably other tools (like web search) that OpenAI puts under the `multi_tool_use` wrapper.