joaomdmoura / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

Couldn't use Ollama along with CrewAI agents [Reason: Stop Sequence] #641

Open Nanthagopal-Eswaran opened 1 month ago

Nanthagopal-Eswaran commented 1 month ago

Hi,

I recently went through the "Multi AI Agent Systems with crewAI" short course on the DeepLearning.AI platform, and I was trying to run the "Content Writer Crew" example locally using Ollama/llama3.

I noticed that the crew kept executing indefinitely, and I had to interrupt the kernel to stop it.

I set up a litellm proxy and inspected the outgoing request. I then sent the same request directly to the LLM and found the issue. (Request and response screenshots omitted.)

If we don't pass the stop sequence "\nObservation", generation stops at the model's default stop sequences. But once we add the crewAI stop sequence, the model simply continues to generate.
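A minimal sketch of the repro against Ollama's HTTP API (assuming a local server on the default port 11434; the prompt here is a stand-in for the request captured via the litellm proxy):

```python
import requests

# Send the captured request straight to a local Ollama server.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "<agent prompt captured via the litellm proxy>",
        "stream": False,
        # Without this option, generation ends at llama3's default stop
        # tokens; with it, Ollama keeps generating past them.
        "options": {"stop": ["\nObservation"]},
    },
)
print(resp.json()["response"])
```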

I don't think this stop sequence is handled consistently across all models/providers, and there is also no mention of it anywhere in the prompt.

Could you look into this and enable support for Ollama as well?

jcoombes commented 1 month ago

As a stopgap, can this be fixed by altering the stop sequence expected by the ollama MODELFILE?
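An untested sketch of what such a Modelfile could look like (whether a derived Modelfile keeps the base model's stop tokens is an assumption here, so llama3's end-of-turn token is repeated defensively; note crewAI actually sends "\nObservation"):

```
FROM llama3

# Keep llama3's own end-of-turn token
PARAMETER stop <|eot_id|>
# Add the stop word crewAI expects
PARAMETER stop Observation
```

It would then be registered with something like `ollama create crewai-llama3 -f ./Modelfile` and pointed at from the crew's LLM config.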

Nanthagopal-Eswaran commented 1 month ago

Thanks @jcoombes for the quick response.

As a temporary solution, I made the modification below and was able to execute the crew. But for now this only works for the ollama/llama3 model.

(Screenshot of the modification omitted.)
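One plausible shape of such a llama3-specific workaround (a sketch only, since the screenshot is not preserved; the actual change may differ):

```python
# Hypothetical sketch: send llama3's own end-of-turn token alongside
# crewAI's stop word so generation still terminates. Hardcoding
# <|eot_id|> is what makes this llama3-specific.
options = {"stop": ["<|eot_id|>", "\nObservation"]}
```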

Nanthagopal-Eswaran commented 1 month ago

I get that this is an unexpected issue on the Ollama side, which should have honored both the requested stop sequence and the default stop sequences. I have raised an issue there as well - https://github.com/ollama/ollama/issues/4524

In our case here, though, I couldn't work out why we need the stop sequence "\nObservation" at all, when there is no mention of it in the prompt.

wgong commented 4 days ago

> As a stopgap, can this be fixed by altering the stop sequence expected by the ollama MODELFILE?

I did not create a custom crewAI Modelfile initially, and the crew seemed to run for a very long time. Then I followed https://docs.crewai.com/how-to/LLM-Connections/#setting-up-ollama to create a custom model, crewai-llama3, and it ran fine.
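For reference, the Modelfile in that guide is roughly along these lines (reproduced from memory, so treat the base model name and exact parameters as assumptions):

```
FROM llama3

# The guide adds "Result" as a stop word for crewAI runs
PARAMETER stop Result
```

It gets registered with `ollama create crewai-llama3 -f Modelfile`.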

Does PARAMETER stop Result mean that crewAI will stop generating once it sees "Result"? Thanks