justyns / silverbullet-ai

Plug for SilverBullet to integrate LLM functionality
https://ai.silverbullet.md/
GNU Affero General Public License v3.0

Is it expected that we need to pass `openai` as the provider for `ollama` models? #57

Closed: smileBeda closed this issue 1 month ago

smileBeda commented 1 month ago

In the docs I see we are expected to configure local Ollama models like so:

- name: mistral-nemo
  modelName: mistral-nemo
  provider: openai
  baseUrl: http://localhost:11434/v1
  requireAuth: false

Is that expected? Because if I do this instead:

- name: mistral-nemo
  modelName: mistral-nemo
  provider: ollama
  baseUrl: http://localhost:11434/v1
  requireAuth: false

Then it errors out with `Unsupported AI provider: ollama`, whereas it works if I use `openai`. I'm just confused as to whether this is expected and, if so, whether it will change. It seems weird to use `openai` as the provider for a local model loaded with Ollama.

justyns commented 1 month ago

Hi @smileBeda , thanks for the feedback!

Originally, the 'openai' provider was intended for any OpenAI-API-compatible service or local LLM like Ollama. However, when recently adding support for embedding generation, I realized Ollama doesn't support the OpenAI embeddings API (yet), so I added a new provider specific to Ollama.
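For reference, an embedding model entry for that Ollama-specific provider ends up looking roughly like the text model config above; treat the model name and exact base URL here as placeholders rather than exact values:

- name: all-minilm
  modelName: all-minilm
  provider: ollama
  baseUrl: http://localhost:11434
  requireAuth: false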

So it's expected right now, but I agree it's kind of weird. I think what I'll do is add a new ollama provider for text models as well so it's more consistent. It might just be a proxy or alias for the openai one for now, but it would let us make Ollama-specific changes more easily if needed. For example, I kind of want a setting to automatically pull models when needed.

justyns commented 1 month ago

This is now supported in 0.3.2.
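That means the second config from the original question should now work more or less as written (double-check the docs on whether the `/v1` suffix is still needed with the `ollama` provider):

- name: mistral-nemo
  modelName: mistral-nemo
  provider: ollama
  baseUrl: http://localhost:11434/v1
  requireAuth: false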