FarisHijazi / localCopilot

A self-hosted GitHub Copilot guide using the oobabooga webui

Ollama support #10

Open shouryan01 opened 8 months ago

shouryan01 commented 8 months ago

Ollama is a very popular backend for running local models and has a large library of supported models. It would be great to see Ollama support.

FarisHijazi commented 8 months ago

Does it expose an OpenAI-compatible endpoint? If it does, then support should be simple.

shouryan01 commented 8 months ago

It doesn't; however, you can use LiteLLM to wrap the endpoint and make it OpenAI-compatible.
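
For illustration, a rough sketch of the idea (the model name and port below are assumptions, not something this repo ships): LiteLLM can take an OpenAI-style request in Python and forward it to Ollama's native API.

```python
# Sketch only: LiteLLM translating an OpenAI-style chat call to a locally
# running Ollama server. Assumes Ollama is serving on its default port 11434
# and that a model named "codellama" has already been pulled.
import litellm

response = litellm.completion(
    model="ollama/codellama",            # the "ollama/" prefix routes the call to Ollama
    api_base="http://localhost:11434",   # Ollama's default HTTP endpoint
    messages=[{"role": "user", "content": "Complete this: def fibonacci(n):"}],
)
print(response.choices[0].message.content)
```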

FarisHijazi commented 7 months ago

Good idea, I'll work on it soon.

I've been super busy the last few months. I'd appreciate it if you could try things out and let me know if you face any issues; it should be as simple as pointing the middleware to the Ollama/LiteLLM OpenAI port.
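
Something along these lines should work, assuming a LiteLLM proxy was started separately (for example with `litellm --model ollama/codellama --port 8000`; the exact flags and default port vary between LiteLLM versions). This is only a sketch using the standard openai Python client, not the actual middleware code:

```python
# Sketch only: an OpenAI-style client talking to a LiteLLM proxy that fronts Ollama.
# Assumes the proxy is listening on port 8000 (adjust to whatever port you chose).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # LiteLLM's OpenAI-compatible endpoint
    api_key="sk-anything",                 # a local proxy typically does not validate the key
)

completion = client.chat.completions.create(
    model="ollama/codellama",              # should match the model the proxy was started with
    messages=[{"role": "user", "content": "Write a hello-world function in Python."}],
)
print(completion.choices[0].message.content)
```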

Mayorc1978 commented 7 months ago

> It doesn't; however, you can use LiteLLM to wrap the endpoint and make it OpenAI-compatible.

So is it possible to run Ollama in Docker, say exposing the usual localhost:11434 port, and then use LiteLLM to convert that into an OpenAI endpoint? Or do you need to run the Ollama model directly from LiteLLM?

shouryan01 commented 7 months ago

> > It doesn't; however, you can use LiteLLM to wrap the endpoint and make it OpenAI-compatible.
>
> So is it possible to run Ollama in Docker, say exposing the usual localhost:11434 port, and then use LiteLLM to convert that into an OpenAI endpoint? Or do you need to run the Ollama model directly from LiteLLM?

Looks like LiteLLM provides a Docker image.
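
For what it's worth, LiteLLM doesn't run the model itself in that setup: Ollama keeps serving on 11434, and LiteLLM only translates OpenAI-style requests and forwards them. A rough sanity check against the proxy could look like this (ports, hostnames, and the model name are assumptions):

```python
# Sketch: sanity-checking a LiteLLM proxy that forwards to a separately running
# Ollama instance. Assumes the proxy container is published on localhost:8000
# and was configured to point at Ollama (e.g. http://host.docker.internal:11434
# on Docker Desktop, or the host's address on Linux).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-local")

# List the models the proxy exposes; the Ollama-backed one should show up.
for model in client.models.list():
    print(model.id)

# A one-line completion confirms requests actually reach Ollama.
reply = client.chat.completions.create(
    model="ollama/codellama",
    messages=[{"role": "user", "content": "Say hi"}],
)
print(reply.choices[0].message.content)
```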

FarisHijazi commented 2 months ago

I'll try to get on it soon after fixing the authentication