Open Nanthagopal-Eswaran opened 1 month ago
Thanks @jcoombes for the quick response.
As a temporary solution, I made the modification below and was able to execute the crew. But this only works for the ollama/llama3 model for now.
I understand this is an issue on the ollama side, which should handle both the requested stop sequences and the default ones. I also raised an issue there: https://github.com/ollama/ollama/issues/4524
In our case, I couldn't understand why we need the stop sequence "\nObservation" when there is no mention of it in the prompt.
As a stopgap, can this be fixed by altering the stop sequence expected by the ollama MODELFILE?
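For illustration, such a Modelfile tweak might look like the following (a sketch only: the base model name, the custom model name, and the extra stop parameter are assumptions based on this thread, not a confirmed fix):

```shell
# Hypothetical Modelfile that adds crewai's stop sequence to the
# model's own defaults, then registers it as a custom model.
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER stop Result
PARAMETER stop "\nObservation"
EOF
ollama create crewai-llama3 -f Modelfile
```

You would then point crewai at the `crewai-llama3` model instead of plain `llama3`.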
I did not create a custom crewai Modelfile initially, and the crew seemed to run for a long time. I then followed https://docs.crewai.com/how-to/LLM-Connections/#setting-up-ollama to create a custom model, crewai-llama3, and it ran fine.
Does `PARAMETER stop Result` mean that crewai will stop once it gets a result?
Thanks
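As background on that question: `PARAMETER stop Result` tells Ollama to stop generating as soon as the model emits the string `Result`; the stop text itself is discarded, so the caller receives only what came before it. A minimal sketch of that truncation behavior (a hypothetical helper, not part of crewai or Ollama):

```python
def apply_stop(text: str, stops: list[str]) -> str:
    """Simulate stop-sequence handling: generation ends at the first
    occurrence of any stop string, which is excluded from the result."""
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

# Everything from "\nObservation" onward is dropped.
print(apply_stop("Thought: ...\nObservation: tool output", ["\nObservation"]))
# → Thought: ...
```

So yes: when the model emits `Result` (or any other configured stop string), Ollama cuts the output there and returns it.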
Hi,
I recently went through the "Multi AI Agent Systems with crewAI" short course on the DeepLearning.AI platform. I was trying to run the "Content Writer Crew" example locally using Ollama/llama3.
I noticed that the crew was executing indefinitely and I had to interrupt the kernel to stop it.
I set up a litellm proxy and inspected the outgoing request. I then sent the same request directly to the LLM and found the issue. Request:
Response:
![image](https://github.com/joaomdmoura/crewAI/assets/115451020/e1affe1a-9821-4410-8560-e1b46be7db94)
If we don't pass the stop sequence "\nObservation", generation stops at the default stop sequences shown above. But when we add the crewai stop sequence, it simply keeps generating.
I don't think this stop sequence is universal across all models/providers, and there is no mention of it in the prompt.
Could you look into this and enable support for Ollama as well?