hungdtrn closed this pull request 8 months ago
Please keep the pre-commit related changes out of this PR. They aren't related to the LiteLLM changes.
> Please keep the pre-commit related changes out of this PR. They aren't related to the LiteLLM changes.
Thanks for reminding me. I have reverted to the previous commit.
One final nitpick from me and it's good to go. @nqngo want to give it a once-over?
@phattantran1997 Please have a look at [llm_assistant/ollama/README.md] for a better understanding of how the LiteLLM proxy server works.
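For context, a LiteLLM proxy is typically driven by a `config.yaml` along these lines. The model names, key reference, and Ollama address below are illustrative assumptions, not the repo's actual config:

```yaml
# Hypothetical LiteLLM proxy config: routes both OpenAI and Ollama models
# behind one OpenAI-compatible endpoint.
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY   # read from environment
  - model_name: llama3
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434     # default Ollama address
```

Clients then request either `model_name` through the same proxy endpoint.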
Initially, I considered using LiteLLM to replace the OpenAI SDK for interacting with both ChatGPT and Ollama. However, as suggested by @phattantran1997, we can improve on this approach.
Let’s create a proxy server that wraps around both ChatGPT and Ollama. By doing this, we can reuse the OpenAI SDK to talk to this proxy server instead of the base OpenAI server. This approach is better because it lets us integrate Ollama with existing OpenAI applications without requiring any code changes.
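To illustrate the "no code changes" point: because the proxy speaks the OpenAI chat-completions API, existing apps only repoint the SDK's `base_url`. A minimal sketch of the shared request shape (the proxy URL and model alias here are assumptions, not values from this PR):

```python
import json

# With a LiteLLM proxy in front, application code keeps using the OpenAI SDK
# and only changes the base URL, e.g. (hypothetical values):
#   client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-anything")
PROXY_BASE_URL = "http://localhost:4000/v1"  # hypothetical proxy address

def chat_request_body(model: str, prompt: str) -> str:
    """Build the /chat/completions JSON body both OpenAI and the proxy accept."""
    return json.dumps(
        {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }
    )

# Same payload works whether "model" resolves to ChatGPT or to Ollama.
body = chat_request_body("ollama/llama3", "Hello through the proxy")
print(body)
```

Whether the request lands on ChatGPT or Ollama is decided by the proxy's model routing, not by the client.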
As for Docker, we need to set up the proxy server inside the Ollama container and interact with that proxy server instead. @samhwang @nqngo
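One way to sketch the Docker wiring (shown here as two Compose services for clarity; the comment above proposes colocating the proxy in the Ollama container instead — ports, image tags, and paths are assumptions):

```yaml
# Hypothetical docker-compose sketch, not this repo's actual setup.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
  litellm-proxy:
    image: ghcr.io/berriai/litellm:main-latest
    command: ["--config", "/app/config.yaml", "--port", "4000"]
    volumes:
      - ./config.yaml:/app/config.yaml
    depends_on:
      - ollama
    ports:
      - "4000:4000"
```

Applications would then point the OpenAI SDK at port 4000 rather than at api.openai.com.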