Closed: smileBeda closed this issue 1 month ago.
In the docs I see we are expected to pass local Ollama models with `openai` as the provider. Is that expected? If I set the provider to `ollama` instead, it errors out with `Unsupported AI provider: ollama`, and it works if I use `openai`. I am just confused as to whether this is expected and, if so, whether it will change. It seems weird to use `openai` as the provider for a local model loaded with Ollama.
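Roughly, a sketch of the two setups (the key names here are hypothetical, since I'm paraphrasing rather than quoting the docs exactly):

```python
# Hypothetical config sketch; the key names are my own, not the
# project's exact schema.

# What the docs say to do: a local Ollama model behind the "openai"
# provider, via Ollama's OpenAI-compatible endpoint.
works = {
    "provider": "openai",
    "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    "model": "llama3",
}

# What I tried instead, which fails with "Unsupported AI provider: ollama".
fails = {
    "provider": "ollama",
    "model": "llama3",
}
```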
Hi @smileBeda, thanks for the feedback!

Originally the `openai` provider was intended for any OpenAI-API-compatible service or local LLM server like Ollama. However, while recently adding support for embedding generation, I realized Ollama doesn't support the OpenAI embeddings API (yet), so I added a new provider specific to Ollama.

So it's expected right now, but I agree it's kind of weird. I think what I'll do is add an `ollama` provider for text models as well so it's more consistent. It might just be a proxy or alias for the `openai` one at first, but it would let us make Ollama-specific changes more easily if needed; e.g. I'd like a setting to automatically pull models when needed.
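For illustration, a minimal sketch of what that alias could look like; the function and parameter names are placeholders, not the actual code:

```python
from openai import OpenAI  # the official openai Python client


def make_client(provider: str, base_url: str | None = None) -> OpenAI:
    """Illustrative provider factory: 'ollama' is just the 'openai'
    client pointed at Ollama's OpenAI-compatible /v1 endpoint."""
    if provider == "openai":
        # Any OpenAI-API-compatible service; defaults to api.openai.com.
        return OpenAI(base_url=base_url)
    if provider == "ollama":
        # Ollama speaks the OpenAI chat API at /v1; it ignores the API
        # key, but the client requires one to be set.
        return OpenAI(base_url=base_url or "http://localhost:11434/v1",
                      api_key="ollama")
    raise ValueError(f"Unsupported AI provider: {provider}")
```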
This is now supported in 0.3.2.