Open cvaske opened 8 months ago
Description
I wanted to continue a chat even after a function call had raised an exception.
What I Did
This generates the exception:
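A minimal sketch of this failure mode (the `lookup_order` function and prompt are invented here; it assumes chatlab's `Chat`/`register` API from the README and a Jupyter cell where top-level `await` is available):

```python
import chatlab

def lookup_order(order_id: str) -> dict:
    """Invented tool that always fails, standing in for the original function."""
    raise KeyError(f"order {order_id} not found")

chat = chatlab.Chat()
chat.register(lookup_order)

# The model calls lookup_order, the KeyError propagates uncaught, and the
# pending function-call message is left unresolved in the conversation log.
await chat("Look up order 1234 for me")
```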
and all future calls to the chat generate a 400 error code from OpenAI:
Ideally, the exception result would be added to the message log, allowing the chat to continue (and perhaps even letting the model try a fix for the exception...). The @chatlab.expose_exception_to_llm decorator fixes this, but perhaps the default should be to expose all exceptions to the LLM, and have a decorator that hides them?

> The @chatlab.expose_exception_to_llm decorator fixes this, but perhaps the default should be to expose all exceptions to the LLM, and have a decorator that hides them?

Yeah, I agree with that.
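For reference, applying the decorator discussed above would look roughly like this, reusing the invented `lookup_order` function from the sketch earlier (again assuming `chat.register` and top-level `await` in a Jupyter cell):

```python
import chatlab

@chatlab.expose_exception_to_llm
def lookup_order(order_id: str) -> dict:
    """Same invented tool as above, except its exceptions are shown to the model."""
    raise KeyError(f"order {order_id} not found")

chat = chatlab.Chat()
chat.register(lookup_order)

# The KeyError's text is now returned to the model as the function result,
# so the message log stays well-formed and the conversation can continue.
await chat("Look up order 1234 for me")
```

Keeping the log well-formed is presumably what avoids the subsequent 400 responses, since the failed call no longer leaves a dangling, unanswered function message in the conversation.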