Ollama now supports the OpenAI API. This change lets the user set the base URL used by the OpenAI library, making it possible to point it at a local or self-hosted model. To use it, update the openaipirc config file like this (the key must be a non-empty string, since Ollama requires one but ignores its value):
[openai]
secret_key = llamasarepeopletoo
model = mistral
base_url = http://localhost:11434/v1
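For reference, a minimal sketch of what these settings amount to when using the openai Python library directly, assuming a local Ollama server on the default port; the model name and prompt here are just illustrative:

from openai import OpenAI

# Point the OpenAI client at a local Ollama server instead of api.openai.com.
# Ollama ignores the API key, but the library requires a non-empty string.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="llamasarepeopletoo",
)

response = client.chat.completions.create(
    model="mistral",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)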