Closed andresperezrobinson closed 2 weeks ago
What happens with a different model such as gpt-4 mini? or different Ollama one?
I haven't tried other models, but looking online and at previously raised issues, it appears there is no problem when using GPT models; the issues come from using other models. I was hoping someone had a different experience while using something other than GPT.
Try using more powerful models, e.g. Llama 3.1 70B on Groq, to see if you get more consistent output. In my experience, agents powered by Ollama models need a lot of massaging and prompt engineering to be accurate and consistent.
I understand this model might not be as powerful; however, I would expect the JSON output to be handled by the CrewAI framework, not to depend on the model's power.
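For reference, the structured-output setup being discussed looks roughly like this (a minimal sketch — the schema fields and task wording are assumptions, since the original code isn't shown). CrewAI lets you attach a Pydantic model to a task via `output_pydantic`, and the framework then tries to coerce the agent's final answer into that schema:

```python
from pydantic import BaseModel

# Hypothetical schema standing in for the fields the author wants filled.
class ReportOutput(BaseModel):
    title: str
    summary: str

# With CrewAI, the schema would typically be attached to a task, e.g.:
#   Task(description="...", expected_output="...",
#        agent=my_agent, output_pydantic=ReportOutput)

# A well-formed model reply parses cleanly into the schema:
raw = '{"title": "Q3 results", "summary": "Revenue grew."}'
report = ReportOutput.model_validate_json(raw)
print(report.title)
```

The failure mode in this thread is that weaker models often emit a reply that is *almost* JSON (extra prose, markdown fences), so this validation step fails even though the content is correct.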
Description: Hello, I'm trying to get consistent JSON output from agents by setting a Pydantic model with fields to fill in. Although the content of my output is correct, the JSON output is not formed as expected. I've summarized my code and the "unwanted" results below:
Unwanted Output:
Steps Taken to Resolve:
Issue: Does anyone know why this is happening? Could it be specific to Llama 3? What recommendations are there for resolving it?
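One workaround that often helps with Llama-family models (an assumption on my part — not tested against the exact setup above, and the function below is a hypothetical helper, not a CrewAI API) is to strip markdown code fences and surrounding chatter from the raw reply before parsing, since these models frequently wrap otherwise-valid JSON in a ```json fence. A stdlib-only sketch:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first JSON object out of a model reply that may wrap it
    in markdown fences or surrounding prose."""
    # Prefer content inside a ```json ... ``` (or bare ```) fence.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    candidate = fenced.group(1) if fenced else text
    # Fall back to the outermost braces in the remaining text.
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in reply")
    return json.loads(candidate[start:end + 1])

# Typical "unwanted" Llama-style reply: valid content, wrapped in a fence.
reply = 'Here is the result:\n```json\n{"title": "Q3", "done": true}\n```'
print(extract_json(reply))
```

The cleaned dict can then be passed to the Pydantic model's validator. Note the non-greedy brace match is a simplification: it handles flat objects, not deeply nested ones with trailing text.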