Open shiribailem opened 8 months ago
I like this, but I'd need your help to further define the categories of Meta-AI managers.
There is mention of reject, auto-retry, etc.: what's the category of those?
Or is this best served with "switches" on a per-Persona basis, which would give access to "Tools" (in the OpenAI sense of tools/functions/plugins), including image generation, browsing, and the like?
This would probably be best as switches, though they might not be wholly per-persona or universal, since they would probably depend on specific contexts; e.g. the "no internet access" check would be baked into every ReAct request.
In that example case, it would be a matter of asking the AI whether the message claims it has no internet access (on a request that explicitly does have access). If so, it would automatically retry the request; if that fails again, it could try to automatically retune the user's prompt or system message to bypass the refusal.
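As a rough illustration of that detect-and-retry loop, here is a minimal sketch. All names (`auto_retry`, `looks_like_refusal`, `fake_model`) are hypothetical, and the refusal check is a simple regex stand-in; as described above, the real check could itself be another model call.

```python
# Illustrative sketch, not from the actual codebase: a manager that checks
# a response for a "no internet access" refusal and retries, retuning the
# prompt between attempts.
import re
from typing import Callable

REFUSAL_PATTERN = re.compile(r"don't have (direct )?access to the internet",
                             re.IGNORECASE)

def looks_like_refusal(response: str) -> bool:
    """Crude stand-in for asking a model whether this is a refusal."""
    return bool(REFUSAL_PATTERN.search(response))

def auto_retry(generate: Callable[[str], str], prompt: str,
               max_retries: int = 2) -> str:
    response = generate(prompt)
    for _ in range(max_retries):
        if not looks_like_refusal(response):
            return response
        # Retune the prompt before retrying, per the proposal above.
        prompt += "\n(Reminder: you DO have browsing access; use it.)"
        response = generate(prompt)
    return response

# Toy model for demonstration: refuses until the reminder appears.
def fake_model(prompt: str) -> str:
    if "Reminder" in prompt:
        return "Fetched the page successfully."
    return "Sorry, I don't have access to the internet."

print(auto_retry(fake_model, "Summarize https://example.com"))
# -> "Fetched the page successfully."
```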
I also mentioned a Memory Manager as another example of this: it would automatically store and recall bits of data about the conversation.
The whole idea is just a broad concept of hooks for these kinds of tools.
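To make the "hooks" framing concrete, here is one possible shape for it: a registry of post-processing managers that each get a chance to inspect or rewrite a response before it reaches the user. All names here (`ResponseHooks`, `register`, `run`) are hypothetical, a sketch of the concept rather than a proposed API.

```python
# Hypothetical sketch of the hook concept: each registered manager is a
# function str -> str that is applied to the response in order.
from typing import Callable, List

Hook = Callable[[str], str]

class ResponseHooks:
    def __init__(self) -> None:
        self._hooks: List[Hook] = []

    def register(self, hook: Hook) -> Hook:
        """Add a manager; usable as a decorator."""
        self._hooks.append(hook)
        return hook

    def run(self, response: str) -> str:
        """Pass the response through every registered manager in order."""
        for hook in self._hooks:
            response = hook(response)
        return response

hooks = ResponseHooks()

@hooks.register
def strip_whitespace(response: str) -> str:
    return response.strip()

print(hooks.run("  hello  "))  # -> "hello"
```

A rejection or memory manager would just be another registered function, possibly one that calls out to a second model.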
I believe this request should be treated universally with "post-processing" frameworks. See also #78.
Requirements:
@shiribailem sounds good?
See also #320
Right now the behavior options are really just regular chat and ReAct, with no broader AI-driven behaviors.
My suggestion here is a suite of configurable AI managers (mostly on/off switches plus a choice of which model to use) that act on responses (separately from ReAct), and sometimes on whole conversations.
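One way the "configurable (mostly on/off and which model to use)" part could look is a per-manager config entry with an enabled flag and an optional model override. The class and field names below are illustrative only, not an existing schema.

```python
# Hypothetical config shape for per-manager settings.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ManagerConfig:
    enabled: bool = False
    model: Optional[str] = None  # None = reuse the conversation's model

@dataclass
class BehaviorSettings:
    managers: Dict[str, ManagerConfig] = field(default_factory=dict)

settings = BehaviorSettings(managers={
    "auto_retry": ManagerConfig(enabled=True),
    # A cheaper model could handle checks like refusal detection.
    "fact_check": ManagerConfig(enabled=True, model="small-check-model"),
})

print(settings.managers["auto_retry"].enabled)  # -> True
```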
A prime example might be filtering responses in order to reject, auto-retry, or auto-tune them. For instance, when using ReAct with a browser set up, a manager could detect when the model emits an "I don't have access to the internet" response and (metaphorically) kick it into working properly.
There might also be things like a "fact checking" routine that does extra passes to check whether a response is a hallucination.