Open · oalexdoda opened 1 week ago
I suspect this is standard LLM behavior when the disabled tools are not available. If you mention the tools in e.g. the system prompt, it's up to you to update that as well.
I want to avoid injecting any textual information into the chat from the SDK side; that is up to the user.
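In practice that looks something like the following. A minimal sketch, assuming AI SDK ~4.x (`generateText`, `tool`, `experimental_activeTools`) and the OpenAI provider; the tool names and model are illustrative only:

```ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Illustrative tools; deleteFile is the one being disabled below.
const tools = {
  readFile: tool({
    description: 'Read a file by path',
    parameters: z.object({ path: z.string() }),
    execute: async ({ path }) => ({ path, content: '...' }),
  }),
  deleteFile: tool({
    description: 'Delete a file by path',
    parameters: z.object({ path: z.string() }),
    execute: async ({ path }) => ({ deleted: path }),
  }),
};

// Derive the system prompt from the same list passed to experimental_activeTools,
// so disabling a tool automatically updates what the model is told about itself.
const activeTools: Array<keyof typeof tools> = ['readFile'];

const result = await generateText({
  model: openai('gpt-4o'),
  tools,
  experimental_activeTools: activeTools,
  system: `You can only use these tools: ${activeTools.join(', ')}. If a request needs anything else, say you cannot do it.`,
  prompt: 'Delete old.log',
});

console.log(result.text);
```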
Understood, thank you @lgrammel!
What would be really useful, as a suggestion, is a separate prop, perhaps called `experimental_guardrails`, where we could plug in pre-defined filters for the things LLMs fail to handle on their own. This would be similar to how Azure lets you add content filters, or how platforms like Portkey provide guardrails.
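A purely hypothetical sketch of what such a prop could look like (nothing like this exists in the AI SDK today; the names are made up for illustration):

```ts
import type { CoreMessage } from 'ai';

// A guardrail gets a chance to rewrite the outgoing messages before the model call
// and to inspect or redact the generated text afterwards, similar in spirit to Azure
// content filters or Portkey guardrails.
interface Guardrail {
  name: string;
  beforeCall?: (messages: CoreMessage[]) => CoreMessage[];
  afterCall?: (text: string) => string;
}

// Hypothetical usage:
// generateText({ ..., experimental_guardrails: [inactiveToolNotice, piiRedactor] })
```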
Description
There's a bug where, although `experimental_activeTools` properly filters the active/inactive tools, the LLM still thinks it can do things it no longer can. Instead of the agent saying "I can't do that anymore", it acts as if it still has the tool and invents responses.
This likely needs to be handled better on the LLM side, but perhaps there could be a built-in AI SDK filter that informs the LLM a tool is no longer available when it isn't in `activeTools` but does appear in the message history.
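A userland version of that filter is already possible. A minimal sketch, assuming AI SDK ~4.x `CoreMessage` shapes (`noteInactiveTools` is a made-up helper, not an SDK API): it collects the tool names that appear as tool calls in the history and appends a system note for any that are no longer active.

```ts
import type { CoreMessage } from 'ai';

function noteInactiveTools(
  messages: CoreMessage[],
  activeTools: string[],
): CoreMessage[] {
  // Collect the names of all tools that were called earlier in the conversation.
  const usedTools = new Set<string>();
  for (const message of messages) {
    if (message.role === 'assistant' && Array.isArray(message.content)) {
      for (const part of message.content) {
        if (part.type === 'tool-call') usedTools.add(part.toolName);
      }
    }
  }

  // Tools that appear in the history but are not in activeTools anymore.
  const inactive = [...usedTools].filter((name) => !activeTools.includes(name));
  if (inactive.length === 0) return messages;

  // Tell the model explicitly that these tools are gone instead of letting it guess.
  return [
    ...messages,
    {
      role: 'system',
      content: `The following tools are no longer available: ${inactive.join(', ')}. If the user asks for them, say you cannot do it; do not invent results.`,
    },
  ];
}

// Hypothetical usage:
// generateText({ ..., messages: noteInactiveTools(history, ['readFile']) })
```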
Code example
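A minimal repro sketch, assuming AI SDK ~4.x (`generateText`, `tool`, `experimental_activeTools`, `maxSteps`, `response.messages`) and the OpenAI provider; the model and tool names are illustrative only:

```ts
import { generateText, tool, type CoreMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const tools = {
  readFile: tool({
    description: 'Read a file by path',
    parameters: z.object({ path: z.string() }),
    execute: async ({ path }) => ({ path, content: '...' }),
  }),
  deleteFile: tool({
    description: 'Delete a file by path',
    parameters: z.object({ path: z.string() }),
    execute: async ({ path }) => ({ deleted: path }),
  }),
};

const history: CoreMessage[] = [{ role: 'user', content: 'Delete old.log' }];

// Step 1: both tools are active, the model calls deleteFile, and the call plus its
// result end up in the message history.
const first = await generateText({
  model: openai('gpt-4o'),
  tools,
  experimental_activeTools: ['readFile', 'deleteFile'],
  maxSteps: 2,
  messages: history,
});
history.push(...first.response.messages);

// Step 2: deleteFile is removed from activeTools. Expected: the model says it can no
// longer delete files. Observed: because deleteFile still appears in the history, the
// model often acts as if it ran the tool and invents a result.
history.push({ role: 'user', content: 'Also delete temp.log' });
const second = await generateText({
  model: openai('gpt-4o'),
  tools,
  experimental_activeTools: ['readFile'],
  maxSteps: 2,
  messages: history,
});

console.log(second.text);
```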
Additional context