Open zvictor opened 1 week ago
The current workaround is to throw the error up and then extract info from the error message.
```js
if (previousError) {
  if (previousError.toolName) {
    history.push({
      role: 'assistant',
      // JSON.stringify avoids interpolating "[object Object]" for the args
      content: `Call '${previousError.toolName}' with arguments: ${JSON.stringify(previousError.toolArgs ?? {})}`,
    })
  }
  history.push({ role: 'user', content: previousError.message })
  history.push({ role: 'user', content: 'Do not apologize for errors' })
}
```
That seems to work, but it is brittle and forfeits the benefits of the SDK's standardized messages. It also only covers one specific use case (error retry), leaving every other observability concern open.
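For reference, the workaround can be factored into a small helper. This is just a sketch of my current approach; the helper name `errorToRetryMessages` and the `toolName`/`toolArgs`/`message` fields are assumptions mirroring what I extract from the thrown error, not anything the AI SDK provides:

```javascript
// Hypothetical helper: turn a tool-calling error into messages that can be
// appended to the history for a retry request. Assumes `previousError`
// carries `toolName`, `toolArgs`, and `message` (extracted by the caller).
function errorToRetryMessages(previousError) {
  const history = [];
  if (previousError.toolName) {
    history.push({
      role: 'assistant',
      // Reconstruct the failed tool call so the model sees what it sent
      content: `Call '${previousError.toolName}' with arguments: ${JSON.stringify(previousError.toolArgs ?? {})}`,
    });
  }
  // Feed the raw error text back, and discourage apology boilerplate
  history.push({ role: 'user', content: previousError.message });
  history.push({ role: 'user', content: 'Do not apologize for errors' });
  return history;
}
```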
I am sorry, I closed the issue by accident 🤗
Feature Description
While porting gemini-chatbot to Anthropic's provider, I ran into some unexpected challenges related to tool/function calling.
1️⃣ First and foremost, it's not possible to gain any visibility into the tool-calling process, because all the parsing, validation, and error handling of messages received from the provider happen opaquely inside the AI SDK. The `tool-call` message can't be accessed in any way if it's invalid.

2️⃣ Due to 1️⃣, one cannot perform any post-operation using the data from the `tool-call` message, such as asking the LLM to fix the broken tool call.

Instead, ideally, it should be possible to:
Activate logging or event triggering in order to listen to (and ideally modify) problematic messages before they throw errors that terminate the request.
Take action on the invalid message, such as sending a new request to the LLM that contains the `tool-call` message alongside the `AI_InvalidToolArgumentsError`, `AI_JSONParseError`, or whatever error it caused, asking the LLM for a fix.

Use Case
A developer receives too many `AI_InvalidToolArgumentsError` errors and would like to retry the failing requests, appending the `tool-call` message generated alongside the error message.
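To make the request concrete, here is a minimal sketch of the kind of hook that would cover this use case. Nothing in it exists in the AI SDK today: `generateWithRetry`, the `onToolCallError` callback, and the `toolCall` property on the error are all invented names illustrating the desired shape:

```javascript
// Sketch only: a wrapper that, instead of letting an invalid tool call
// terminate the request, hands the error (and the raw tool-call data)
// to a user-supplied hook that returns repaired messages for a retry.
async function generateWithRetry(generate, { maxRetries = 2, onToolCallError } = {}) {
  let messages = [];
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await generate(messages);
    } catch (error) {
      // Without a hook, preserve today's behavior: rethrow.
      if (!onToolCallError) throw error;
      // The hook sees the error and the offending tool call, and decides
      // how to amend the conversation before the retry.
      messages = onToolCallError({ error, toolCall: error.toolCall, messages });
    }
  }
  throw new Error('Exceeded retry budget for invalid tool calls');
}
```

With such a hook, the retry-with-context workaround from above becomes a one-liner in `onToolCallError` instead of string surgery on error messages.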