jekalmin / extended_openai_conversation

A Home Assistant custom component conversation agent. It uses OpenAI to control your devices.

Converting Llama3/Ollama to CrewAI creates an unusable (and angry) LLM #210

Open tholonia opened 1 month ago

tholonia commented 1 month ago

Following your instructions on converting an Ollama model for CrewAI (https://github.com/joaomdmoura/crewAI/blob/main/docs/how-to/LLM-Connections.md), but with llama3, I get the following results:

When I loaded the new model and typed "hello", it responded with a >21,000-word reply, then started yelling at me, ending with a final "I think this is THE VERY LAST MESSAGE! Goodbye!"

In contrast, "hello" on the original llama3 pull responded sanely with:

Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?

The Modelfile is identical to the example, but with llama3 instead of llama2 (it is also identical to the one in this article on Medium: https://medium.com/@bhavikjikadara/step-by-step-guide-on-how-to-integrate-llama3-with-crewai-d9a49b48dbb2):

```
# ./crewai_model_overlay.txt
FROM llama3
PARAMETER temperature 0.8
PARAMETER stop Result
SYSTEM """"""
```

and the script to convert it is (also identical to the example, but with llama3 instead of llama2):

```shell
#!/bin/zsh
model_name="llama3"
custom_model_name="crewai-llama3"
ollama pull "$model_name"
ollama create "$custom_model_name" -f ./crewai_model_overlay.txt
```
Is there something I need to add to make it not go crazy? (Changing the temperature has no effect.)

Is there something different needed for Llama3?
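One guess on my part (purely an assumption, not something I have verified): llama3 uses different special tokens than llama2, notably `<|eot_id|>` to mark the end of a turn, so if the base model's stop tokens are not in effect in the derived model, generation could run on indefinitely. A variant of the overlay I would try, with the end-of-turn token pinned explicitly:

```
# ./crewai_model_overlay.txt (sketch, unverified)
FROM llama3
PARAMETER temperature 0.8
PARAMETER stop Result
# llama3's end-of-turn token; pinning it is a guess at the runaway
# generation, not a confirmed fix
PARAMETER stop <|eot_id|>
SYSTEM """"""
```

The inherited parameters can be compared against the base model with `ollama show --modelfile llama3` versus `ollama show --modelfile crewai-llama3`.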