Hi :) You can adjust the relevant settings in model_backend.py, including the model type, your API key, etc.
If you want to use Ollama in a local setup, you can follow these steps:

```bash
export OPENAI_API_KEY=ollama                 # any value works; Ollama does not check it
export BASE_URL=http://localhost:11434/v1    # your Ollama API server
```
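For context, these variables work because the backend builds its OpenAI client from the environment. Here is a minimal sketch of that wiring, assuming an `openai>=1.0`-style client; the exact variable names in ChatDev's model_backend.py may differ:

```python
import os
from openai import OpenAI

# Sketch: read the endpoint and key from the environment variables set above.
# The dummy key only satisfies the client library; Ollama ignores its value.
client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY", "ollama"),
    base_url=os.environ.get("BASE_URL", "http://localhost:11434/v1"),
)
```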
Then set the `model` parameter to your model in Ollama at this line of `camel/model_backend.py`:
https://github.com/OpenBMB/ChatDev/blob/bbb145048ef1a1be727cd4f176b8bafe1a5f15db/camel/model_backend.py#L100
Example:

```python
response = client.chat.completions.create(
    *args, **kwargs, model="gemma:2b-instruct", **self.model_config_dict
)
```
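Before launching ChatDev, it can help to verify the Ollama endpoint on its own. A small standalone check, assuming Ollama is running locally and you have already pulled `gemma:2b-instruct` (swap in whatever model you use):

```python
from openai import OpenAI

# Point the client at the local Ollama server; the API key is a dummy value.
client = OpenAI(api_key="ollama", base_url="http://localhost:11434/v1")

response = client.chat.completions.create(
    model="gemma:2b-instruct",  # any model pulled via `ollama pull`
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this prints a reply, the same endpoint and model name should work inside model_backend.py.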
Finally, start ChatDev as usual:

```bash
python3 run.py --task "[description_of_your_idea]" --name "[project_name]"
```
@thinh9e OK, thanks. But this should be added to the README file so that it is accessible to everyone.
Could anyone with write access to this repo please add documentation on how to replace the OpenAI models with Ollama models? Thanks.