ArMaTeC opened 2 months ago
Hi all,
Is there any way to add an error trap for the LLM response?
Here are two examples.
Sometimes when I talk to the assistant I get this spam:
but the most important and most wanted one would be a way to re-trigger the assistant when it asks
and so on
This is indeed quite annoying when ChatGPT guesses entity_ids etc. The function call then breaks and you get this error, and you have to close the chat and start over.
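A minimal sketch of what such an error trap could look like, assuming the integration exposes a single function that sends the prompt to the LLM. `query_assistant`, `safe_query`, and the error message are hypothetical stand-ins, not the integration's real API; the idea is just to catch a bad response (e.g. a guessed entity_id), retry, and fall back to a recoverable message instead of killing the conversation:

```python
def query_assistant(prompt: str) -> str:
    # Hypothetical stand-in for the integration's actual LLM call.
    # Here it always fails, simulating a guessed/invalid entity_id.
    raise ValueError("unknown entity_id: light.kitchen")

def safe_query(prompt: str, retries: int = 2) -> str:
    """Retry the LLM call instead of breaking the chat on a bad response."""
    last_error = None
    for attempt in range(retries + 1):
        try:
            return query_assistant(prompt)
        except ValueError as err:  # e.g. the function call failed validation
            last_error = err
    # Surface a recoverable reply so the user can rephrase
    # instead of having to close the chat and start over.
    return f"Sorry, I hit an error ({last_error}). Please rephrase."

print(safe_query("turn on the kitchen light"))
```

The same pattern could feed the error text back into the conversation as a follow-up turn, which would also cover the "re-trigger the assistant" case above.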