jeremychone / rust-genai

Rust multiprovider generative AI client (Ollama, OpenAI, Anthropic, Groq, Gemini, Cohere, ...)
Apache License 2.0

What will `genai` do if the same model name exists in different LLM adapters? #31

Open InAnYan opened 3 weeks ago

InAnYan commented 3 weeks ago

Bug description

In other LLM libraries you have a type for a specific adapter, and then you specify the model name as a string.

However, in `genai` there is only one parameter: the model name.
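
For example, a typical call looks roughly like this (a sketch based on the README-style usage; exact module paths and method names may differ by `genai` version):

```rust
use genai::chat::{ChatMessage, ChatRequest};
use genai::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::default();

    let chat_req = ChatRequest::new(vec![ChatMessage::user("Why is the sky blue?")]);

    // The adapter is inferred from the model string alone; there is no
    // adapter-typed client such as an `OpenAIClient` or `OllamaClient`.
    let chat_res = client.exec_chat("gpt-4o-mini", chat_req, None).await?;
    println!("{:?}", chat_res.content_text_as_str());

    Ok(())
}
```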

In practice it's probably not a big issue, but in my opinion this is a bit wrong.

The main question that came to my mind is: what if companies X and Y both release a model under the same name A?

You know, there are two OpenAIs: one from the OpenAI company itself, the other from Azure...

The other very important thing:

The two main applications for loading LLM models locally are Ollama and GPT4All. It's certainly possible to have the same model names in both Ollama and GPT4All.

I know you don't support GPT4All now, but I noticed in the algorithm for `AdapterKind::from_model` that if a model name doesn't fit any criteria, it is treated as an Ollama model. This might introduce problems if you add GPT4All.
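
To make the ambiguity concrete, here is a minimal, hypothetical sketch of the kind of prefix-based resolution that `AdapterKind::from_model` performs. The match rules and enum variants below are illustrative assumptions, not genai's actual source:

```rust
// Simplified stand-in for genai's AdapterKind resolution (illustrative only).
#[derive(Debug)]
enum AdapterKind {
    OpenAI,
    Anthropic,
    Cohere,
    Gemini,
    Ollama,
}

impl AdapterKind {
    // Guess the adapter from the model name alone.
    fn from_model(model: &str) -> AdapterKind {
        if model.starts_with("gpt") {
            AdapterKind::OpenAI
        } else if model.starts_with("claude") {
            AdapterKind::Anthropic
        } else if model.starts_with("command") {
            AdapterKind::Cohere
        } else if model.starts_with("gemini") {
            AdapterKind::Gemini
        } else {
            // Anything unrecognized falls through to Ollama. If a second local
            // backend (e.g. GPT4All) were added, a name like "llama3" would be
            // ambiguous at this point.
            AdapterKind::Ollama
        }
    }
}

fn main() {
    // "llama3" matches no known prefix, so it is treated as an Ollama model.
    assert!(matches!(AdapterKind::from_model("llama3"), AdapterKind::Ollama));
}
```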

jeremychone commented 3 weeks ago

Good point. So, here is the current thinking:

  1. We have the `ModelIden`, which includes both the `AdapterKind` and the model name.
  2. I might add an `openai::model_name` notation to have a serialized way to force an adapter kind for a model string (I'm trying to keep the model as a `&str` in `exec_chat` for simplicity).
  3. There's a `ModelMapper` to map one `ModelIden` to another. This provides a lot of flexibility (building on point 2); a rough sketch follows this list.
  4. I'm also planning an `EndpointResolver`, a function that resolves the endpoint URL. The structure for this is yet to be decided.
  5. I agree that the fallback to the Ollama adapter isn't ideal, but it's functional for now. I may allow configuration of the fallback `AdapterKind` at the client level.
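
Below is a rough, hypothetical sketch of how point 3 could be used to disambiguate identical model names across adapters. The types are simplified stand-ins that mirror the names above (`ModelIden`, `AdapterKind`, `ModelMapper`); they are not genai's actual API, which was still being designed at the time of this thread:

```rust
// Hypothetical, simplified stand-ins; not genai's real types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum AdapterKind {
    Ollama,
    Gpt4All, // hypothetical future adapter
}

#[derive(Debug, Clone, PartialEq)]
struct ModelIden {
    adapter_kind: AdapterKind,
    model_name: String,
}

// A ModelMapper is conceptually just a function from one ModelIden to another.
type ModelMapper = fn(ModelIden) -> ModelIden;

// Route every "llama*" model to GPT4All instead of the default Ollama fallback.
fn map_local_models(iden: ModelIden) -> ModelIden {
    if iden.model_name.starts_with("llama") {
        ModelIden { adapter_kind: AdapterKind::Gpt4All, ..iden }
    } else {
        iden
    }
}

fn main() {
    let mapper: ModelMapper = map_local_models;
    let resolved = mapper(ModelIden {
        adapter_kind: AdapterKind::Ollama, // what the default fallback would pick
        model_name: "llama3".to_string(),
    });
    assert_eq!(resolved.adapter_kind, AdapterKind::Gpt4All);
}
```

A mapper along these lines, combined with a configurable fallback `AdapterKind` (point 5), would let an application pin ambiguous local model names to the backend it actually wants.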