ksylvest / omniai

OmniAI standardizes the APIs for multiple AI providers like OpenAI's ChatGPT, Mistral's Le Chat, Anthropic's Claude, and Google's Gemini.
https://omniai.ksylvest.com/
MIT License

Feature Request: How about LocalAI / Ollama #21

Closed: MadBomber closed 3 months ago

MadBomber commented 3 months ago

Thank you very much for this capability. It's going to make my aia CLI tool so much nicer.

In order to drop all of my current dependencies on other back-end CLI programs, I need to be able to access the LocalAI and Ollama APIs.

ksylvest commented 3 months ago

Hi @MadBomber thanks for the feedback! Just confirming - are you using the following to run Ollama locally?

brew install ollama
ollama serve

e.g. Meta Llama

ollama pull llama3:latest

If so, I believe that Ollama is compatible with the OpenAI API. This may work for you without any changes:

require 'omniai/openai'

# Point the OpenAI client at the local Ollama server (no API key required):
client = OmniAI::OpenAI::Client.new(host: 'http://127.0.0.1:11434', api_key: nil)
completion = client.chat('Tell me a joke.', model: 'llama3')
completion.choice.message.content # 'Why couldn't the bicycle stand up by itself? Because it was two-tired!'

The host can be configured globally with:

OmniAI::OpenAI.config do |configuration|
  configuration.host = 'http://127.0.0.1:11434'
end

If so, it probably warrants some better documentation for usage w/ Ollama (and maybe a provider that just inherits from OpenAI or includes the subset of needed APIs?). A rough sketch of that idea follows.
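
As a minimal sketch (the OmniAI::Ollama namespace is hypothetical; only the host: and api_key: options shown above are assumed), such a provider could reuse the OpenAI client wholesale:

require 'omniai/openai'

# Hypothetical Ollama provider: inherit the OpenAI client and default
# the host to the local Ollama server, with no API key.
module OmniAI
  module Ollama
    class Client < OmniAI::OpenAI::Client
      def initialize(host: 'http://127.0.0.1:11434', api_key: nil, **options)
        super(host: host, api_key: api_key, **options)
      end
    end
  end
end

client = OmniAI::Ollama::Client.new
client.chat('Tell me a joke.', model: 'llama3')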

ksylvest commented 3 months ago

Just following up here, but some documentation / fixes have been merged that outline the usage of both LocalAI and Ollama:

https://github.com/ksylvest/omniai#usage-with-localai
https://github.com/ksylvest/omniai#usage-with-ollama

Going to close for now, but feel free to re-open if you find any issues. Make sure you have at least v1.2.2 of omniai and v1.2.1 of omniai-openai installed:

gem update omniai 
gem update omniai-openai

MadBomber commented 3 months ago

Yep, you are right. I've forked your gem. One of the things I'm looking at is removing the need to explicitly specify a back-end client when the model name has already been given. I would also like the client interface within the OmniAI namespace to be as consistent as possible across all clients.

For Ollama, I would make that an OmniAI class that subclasses the OmniAI::OpenAI class. I am planning on reviewing the entire code base this weekend.

ksylvest commented 3 months ago

I do something similar in another project. The library doesn't currently have an exhaustive list of models for each provider. The approach I've taken is to instantiate the client based on a regex mapping of known names:

case model
when /claude/ then OmniAI::Anthropic::Client.new
when /gemini/ then OmniAI::Google::Client.new
when /mistral/ then OmniAI::Mistral::Client.new
else OmniAI::OpenAI::Client.new
end

The above snippet probably needs a few extra cases and is not exhaustive (e.g. Mistral has codestral-latest).

A challenge I'd try to avoid is having the per-model client lookup require that every provider gem is installed. For example, a user who only cares about Google and OpenAI should only need to install the omniai-google and omniai-openai gems. One way to sidestep that is sketched below.
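
A sketch of that idea (assuming each gem follows the same require path as omniai/openai above): defer each require to the branch that needs it, so only the installed provider gems are ever loaded:

# Sketch: lazy-require each provider gem so users only need the gems
# for the providers they actually use; a missing gem raises LoadError
# only when one of its models is requested.
def client_for(model)
  case model
  when /claude/
    require 'omniai/anthropic'
    OmniAI::Anthropic::Client.new
  when /gemini/
    require 'omniai/google'
    OmniAI::Google::Client.new
  when /mistral|codestral/
    require 'omniai/mistral'
    OmniAI::Mistral::Client.new
  else
    require 'omniai/openai'
    OmniAI::OpenAI::Client.new
  end
end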

ksylvest commented 2 months ago

@MadBomber I know this issue is closed, but wanted to give a heads up that I've been working on including some basic CLI support. As part of that, the latest version (v1.4.1) of OmniAI::Client now supports a method to look up a provider dynamically by name:

https://github.com/ksylvest/omniai/blob/main/lib/omniai/client.rb#L75-L91

OmniAI::Client.find(provider: "anthropic")
OmniAI::Client.find(provider: "google")
OmniAI::Client.find(provider: "mistral")
OmniAI::Client.find(provider: "openai")

Hopefully this helps if you can just provide a model -> provider mapping.
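
For instance, a small pattern table (the patterns are hypothetical; OmniAI::Client.find is the only API taken from above) could resolve a model name to a provider:

# Hypothetical model-name patterns mapped to the provider names that
# OmniAI::Client.find accepts; unknown models fall back to OpenAI.
PROVIDER_PATTERNS = {
  /claude/            => 'anthropic',
  /gemini/            => 'google',
  /mistral|codestral/ => 'mistral',
}.freeze

def client_for(model)
  _, provider = PROVIDER_PATTERNS.find { |pattern, _| pattern.match?(model) }
  OmniAI::Client.find(provider: provider || 'openai')
end

client_for('claude-3-5-sonnet') # => an Anthropic client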

MadBomber commented 2 months ago

It would also be interesting if the providers' APIs or developer websites offered a way to "download" the list of model names and their attributes.

ksylvest commented 2 months ago

I think that's a great idea. Some of the LLM providers already support such an API:

https://platform.openai.com/docs/api-reference/models/list
https://docs.mistral.ai/api/#operation/listModels
https://ai.google.dev/api/rest/v1beta/models/list

The only one missing seems to be Claude (Anthropic). For that use case, though, I'd say it'd be fairly straightforward to support a hard-coded list using this:

https://docs.anthropic.com/en/docs/about-claude/models
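
As a sketch, listing models from OpenAI's /v1/models endpoint (documented at the first link above) with plain Net::HTTP looks roughly like this; error handling omitted:

require 'net/http'
require 'json'

# Fetch the model list from OpenAI's documented /v1/models endpoint.
uri = URI('https://api.openai.com/v1/models')
request = Net::HTTP::Get.new(uri)
request['Authorization'] = "Bearer #{ENV.fetch('OPENAI_API_KEY')}"

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

JSON.parse(response.body)['data'].each { |model| puts model['id'] }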