jodendaal / DesktopAI

A desktop application using Blazor and Electron to allow interfacing with different AI models.
MIT License

Abstractions #4

Open jodendaal opened 1 year ago

jodendaal commented 1 year ago

Design the app so we are able to swap out implementations of providers and mix and match.

E.g. allow use of OpenAI, Llama, Dolly, Hugging Face models, etc.

The application should be configurable so you can choose a provider per feature, e.g. use OpenAI for image generation and Llama (or some other LLM) for text completion.
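A minimal sketch of what this per-feature provider abstraction could look like. All names here (`TextCompletionProvider`, `LlamaTextProvider`, the `PROVIDERS` map, etc.) are hypothetical and not part of the DesktopAI codebase; real network calls are stubbed out.

```python
# Hypothetical sketch: one interface per feature, any backend can implement it,
# and a per-feature map decides which provider serves which feature.
from abc import ABC, abstractmethod


class TextCompletionProvider(ABC):
    """Interface any text-completion backend must implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class ImageGenerationProvider(ABC):
    """Interface any image-generation backend must implement."""

    @abstractmethod
    def generate(self, prompt: str) -> bytes: ...


class LlamaTextProvider(TextCompletionProvider):
    def complete(self, prompt: str) -> str:
        return f"[llama] {prompt}"  # real model call elided


class OpenAIImageProvider(ImageGenerationProvider):
    def generate(self, prompt: str) -> bytes:
        return b"<png bytes>"  # real API call elided


# Per-feature configuration: mix and match backends freely.
PROVIDERS = {
    "text_completion": LlamaTextProvider(),
    "image_generation": OpenAIImageProvider(),
}


def complete_text(prompt: str) -> str:
    return PROVIDERS["text_completion"].complete(prompt)
```

Swapping OpenAI in for text completion would then be a one-line change to the `PROVIDERS` map, with no changes to feature code.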

jodendaal commented 1 year ago

We will do this once we have implemented OpenAI first; keep it simple for everyone to begin with.

ishaan-jaff commented 1 year ago

Hi @jodendaal, I'm the maintainer of LiteLLM (an abstraction to call 100+ LLMs). It lets you create a proxy server to call 100+ LLMs, and I think it can solve your problem (I'd love your feedback if it does not).

Try it here: https://docs.litellm.ai/docs/proxy_server https://github.com/BerriAI/litellm

Using LiteLLM Proxy Server

```python
import openai

openai.api_base = "http://0.0.0.0:8000/"  # proxy URL
print(openai.ChatCompletion.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
))
```
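Tying this back to the per-feature idea in the issue: one option is to run one LiteLLM proxy per backend and route each feature to its own base URL. A minimal sketch; the port numbers and the `FEATURE_ENDPOINTS` map are illustrative assumptions, not part of LiteLLM or DesktopAI.

```python
# Hypothetical: route each app feature to a different LiteLLM proxy.
FEATURE_ENDPOINTS = {
    "text_completion": "http://0.0.0.0:8000",   # e.g. proxy for ollama/llama2
    "image_generation": "http://0.0.0.0:8001",  # e.g. proxy for an OpenAI key
}


def endpoint_for(feature: str) -> str:
    """Look up the proxy base URL configured for a feature."""
    return FEATURE_ENDPOINTS[feature]
```

The app would then set `openai.api_base = endpoint_for("text_completion")` (or pass the base URL per request) before making a call for that feature.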

Creating a proxy server

Ollama models

```shell
$ litellm --model ollama/llama2 --api_base http://localhost:11434
```

Hugging Face Models

```shell
$ export HUGGINGFACE_API_KEY=my-api-key  # [OPTIONAL]
$ litellm --model huggingface/bigcode/starcoder
```

Anthropic

```shell
$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1
```

PaLM

```shell
$ export PALM_API_KEY=my-palm-key
$ litellm --model palm/chat-bison
```