LaDoger opened this issue 10 months ago (status: Open)
@LaDoger, we will automatically get more LLM models when we integrate llamaindex (#22). The question is, how should we include visual models? We would need a concept for this first.
Hi @marcusschiesser @LaDoger, I'm the maintainer of LiteLLM (an abstraction layer for calling 100+ LLMs). It lets you spin up a proxy server that can call 100+ LLMs, and I think it can solve your problem (I'd love your feedback if it doesn't).
Try it here: https://docs.litellm.ai/docs/proxy_server https://github.com/BerriAI/litellm
```python
import openai

openai.api_base = "http://0.0.0.0:8000"  # point the SDK at the local proxy
print(openai.ChatCompletion.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
))
```
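Since the proxy speaks the OpenAI-compatible chat-completions wire format, the same request can also be sketched with only the Python standard library, without the `openai` SDK. The proxy URL and the `ollama/llama2` model name are taken from the snippets in this thread; the exact endpoint path is an assumption based on the OpenAI API shape:

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request against the local LiteLLM proxy.
# Endpoint path "/chat/completions" is an assumption based on the OpenAI API.
payload = {
    "model": "ollama/llama2",
    "messages": [{"role": "user", "content": "Hey!"}],
}
req = urllib.request.Request(
    "http://0.0.0.0:8000/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment once the proxy is running
```

This keeps the dependency footprint at zero, which may matter if the goal is avoiding extra Python code in the main app.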
Ollama models
```shell
$ litellm --model ollama/llama2 --api_base http://localhost:11434
```
Hugging Face models
```shell
$ export HUGGINGFACE_API_KEY=my-api-key  # [OPTIONAL]
$ litellm --model huggingface/<model-repo>
```
Anthropic
```shell
$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1
```
PaLM
```shell
$ export PALM_API_KEY=my-palm-key
$ litellm --model palm/chat-bison
```
@ishaan-jaff Looking great, but we won't add Python code for the time being
Why not use our proxy for this? That way you don't need to add any Python code.
@ishaan-jaff Sorry, but that would add one more deployment; currently it's just a single Vercel deployment.
Currently Unc supports:
We'll also wanna support: