Open manuel-84 opened 4 months ago
This is a bug: Ollama does not always return a valid JSON string.
There are a few ways to handle this case:
- retry the LLM call several times, asking it for a "fixed" result (see the sketch after this list)
- give up and surface the error to the user (crash)
- try to repair and parse the invalid JSON
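For reference, here is a minimal sketch combining the first and third options against Ollama's chat endpoint (re-ask on parse failure, then fail loudly). The URL, model name, and retry count are assumptions, not values from this project:

```ts
// Hedged sketch: retry an Ollama chat call until the reply parses as JSON.
const OLLAMA_URL = "http://localhost:11434/api/chat"; // assumed local default

async function chatJson(prompt: string, maxRetries = 3): Promise<unknown> {
  let messages: { role: string; content: string }[] = [
    { role: "user", content: prompt },
  ];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(OLLAMA_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3",   // assumed model name
        messages,
        stream: false,
        format: "json",    // ask Ollama to constrain the output to JSON
      }),
    });
    const reply: string = (await res.json()).message.content;
    try {
      return JSON.parse(reply); // option 3: try to parse what came back
    } catch {
      // option 1: feed the bad reply back and ask for a corrected version
      messages = [
        ...messages,
        { role: "assistant", content: reply },
        { role: "user", content: "That was not valid JSON. Reply again with valid JSON only." },
      ];
    }
  }
  throw new Error("Model did not return valid JSON"); // option 2: fail loudly
}
```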
Thanks, I will try. Also, can we edit the agents' default prompts to strengthen the part where they are asked to use valid JSON?
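I'm not sure what the prompt-override hook looks like in this project, but the idea would be to prepend an instruction along these lines to the agent's default system prompt (the constant name and wording below are just illustrative):

```ts
// Hedged sketch: an extra system instruction to prepend to the agent's default
// prompt. How the prompt is actually overridden depends on this project's API.
const JSON_ONLY_INSTRUCTION = `
You must answer with a single valid JSON object and nothing else.
Do not wrap the JSON in markdown code fences or add any commentary.
If a value is unknown, use null rather than omitting the key.
`.trim();

const systemMessage = { role: "system", content: JSON_ONLY_INSTRUCTION };
```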
The same issue occurs with the ChatGPT model.
This is an example, very similar to the ones in the repository, that creates an agent that should reply using the provided functions. The problem is that, very often, the inputs passed to the functions are wrong.
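Until the model reliably fills in the arguments, one defensive workaround is to validate the parsed arguments before invoking the function, so a bad call fails with a clear error instead of running with undefined values. A minimal sketch, with made-up function and field names that are not from the original example:

```ts
// Hedged sketch: guard a tool/function call against missing or undefined
// arguments before executing it.
interface SearchArgs {
  query: string;
  maxResults: number;
}

function parseSearchArgs(raw: unknown): SearchArgs {
  const obj = raw as Partial<SearchArgs> | undefined;
  if (!obj || typeof obj.query !== "string" || typeof obj.maxResults !== "number") {
    throw new Error(`Invalid arguments for search: ${JSON.stringify(raw)}`);
  }
  return { query: obj.query, maxResults: obj.maxResults };
}

// Usage: validate before executing, instead of calling the tool with undefined.
function runSearchTool(rawArgs: unknown) {
  const args = parseSearchArgs(rawArgs);
  // ... perform the actual search with args.query and args.maxResults
}
```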
Result (note where the parameters passed are undefined):