ollama / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.
https://ollama.com
MIT License

Add support for third-party hosted APIs #4440

Open · opened 2 weeks ago by 19h

19h commented 2 weeks ago

We've been coding against the Ollama API internally, and eventually it hit me: Ollama should be able to support third-party API providers, making it a de facto gateway to LLMs.

For example, it could translate between OpenAI's assistant/user roles and Gemini's model/user conversation format, or expose a Cohere Command R+-style completions interface while actually talking to Claude behind the scenes.
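To illustrate the kind of translation layer this implies, here's a minimal Go sketch of mapping OpenAI-style chat roles onto Gemini's user/model convention. The types are invented for illustration and are not Ollama's actual API types; a real gateway would also have to handle system prompts, streaming, tool calls, and error mapping.

```go
package main

import "fmt"

// OpenAIMessage is an illustrative OpenAI-style chat message,
// where role is "system", "user", or "assistant".
type OpenAIMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// GeminiContent is an illustrative Gemini-style turn,
// where role is "user" or "model".
type GeminiContent struct {
	Role  string   `json:"role"`
	Parts []string `json:"parts"`
}

// toGemini maps OpenAI roles onto Gemini's user/model convention.
// (Gemini has no "system" role; a real adapter would route system
// prompts to a separate system-instruction field.)
func toGemini(msgs []OpenAIMessage) []GeminiContent {
	out := make([]GeminiContent, 0, len(msgs))
	for _, m := range msgs {
		role := m.Role
		if role == "assistant" {
			role = "model" // Gemini calls the assistant side "model"
		}
		out = append(out, GeminiContent{Role: role, Parts: []string{m.Content}})
	}
	return out
}

func main() {
	msgs := []OpenAIMessage{
		{Role: "user", Content: "Hello"},
		{Role: "assistant", Content: "Hi! How can I help?"},
	}
	fmt.Printf("%+v\n", toGemini(msgs))
}
```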

Might sound utterly off-topic, but think about it.

I implemented a hard-coded model in Ollama for local use so I can use unsupported hosted LLMs in Cody for coding, and I feel like this could very well be a Modelfile-level feature, with providers happily supplying their own integrations. That would put even more of a spotlight on Ollama while pushing LLM providers to be less fuzzy about their API integrations, provided the Modelfile spec is rigid enough. A sketch of what that might look like follows below.
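To make that concrete, a Modelfile could conceivably point at a hosted provider instead of local weights. The `provider://` scheme and `API_KEY_ENV` directive below are purely hypothetical and invented for illustration; nothing like them exists in the current Modelfile spec:

```
# Hypothetical Modelfile — provider:// and API_KEY_ENV are not real directives.
FROM provider://anthropic/claude-3-opus
PARAMETER temperature 0.7
API_KEY_ENV ANTHROPIC_API_KEY
```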

ProjectMoon commented 2 weeks ago

Isn't this something that LiteLLM can do?

oldmanjk commented 2 weeks ago

> Isn't this something that LiteLLM can do?

It is, but combining efforts could be a good thing. This space needs more standardization, IMO.