dotneet / smart-chatbot-ui

An open source ChatGPT UI.
https://smart-chatbot-ui.vercel.app/
MIT License

Feature Request: Integration with LiteLLM / LiteLLM-Proxy #173

Open · DBairdME opened this issue 1 year ago

DBairdME commented 1 year ago

Hi. Loving the development of the chatbot. Has any thought been given to integrating the front end with litellm or litellm-proxy to provide abstraction of the LLM being used? With the rapid development and availability of LLM models, having a front end such as smart-chatbot-ui able to leverage more LLMs (including those that may be locally hosted) would be a great development.

ishaan-jaff commented 1 year ago

Hi, I'm the maintainer of LiteLLM - how can I help with this?

DBairdME commented 1 year ago

I'm keen to see what changes would be needed to allow calls to either LiteLLM or LiteLLM-Proxy in place of the current code that targets the OpenAI or Azure OpenAI LLMs. This change could take two forms: simplifying the UI so that the LLM choice is determined entirely by LiteLLM, or querying LiteLLM for the LLMs currently configured for use and having the chatbot UI show that list so the user can select the LLM they wish to use.

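For the second option, what I have in mind is roughly the sketch below (it assumes a locally running LiteLLM proxy that exposes an OpenAI-style models endpoint; the URL and port are placeholders):

    # Ask the proxy which LLMs it is currently configured to serve;
    # the chatbot UI could populate its model picker from this response.
    curl http://localhost:8000/models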

krrishdholakia commented 1 year ago

@DBairdME I'll have a tutorial for this today

krrishdholakia commented 1 year ago

Hey @DBairdME

I believe this is what you need to do, assuming you're running it locally:

  1. Clone the repo

    git clone https://github.com/dotneet/smart-chatbot-ui.git
  2. Install Dependencies

    npm i
  3. Create your env

    cp .env.local.example .env.local
  4. Set the API Key and Base

    OPENAI_API_KEY="my-fake-key"
    OPENAI_API_HOST="http://0.0.0.0:8000"

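(For reference: OPENAI_API_HOST above should point at a running LiteLLM proxy. A minimal way to start one locally - a sketch, assuming litellm is installed with its proxy extra and that the --model and --port flags are available in your version:)

    # Install LiteLLM together with its proxy dependencies
    pip install 'litellm[proxy]'

    # Start an OpenAI-compatible proxy on port 8000, routing requests to a chosen model
    litellm --model gpt-3.5-turbo --port 8000
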
krrishdholakia commented 1 year ago

Let me know if this works for you @DBairdME

Otherwise happy to hop on a 30min call and get this working for you - https://calendly.com/kdholakia

DBairdME commented 1 year ago

Hi Krish, thanks for those notes. I found that if the LiteLLM-Proxy\main.py file is amended to use /v1/ as a prefix in the proxy config, I can use the proxy to communicate with the LLM (without the /v1/ prefix the proxy isn't able to respond correctly). Interestingly, if you choose a new chat within the chatbot, the call to /v1/models gets stuck and the app is not able to take any user input.

krrishdholakia commented 1 year ago

Hey @DBairdME, can you explain that a bit more? What's the error you're seeing?

We have support for both /v1/chat/completions and /chat/completions.

Here's all the available endpoints

[Screenshot: list of the proxy's available endpoints (2023-10-18)]

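A quick way to check both routes from the command line (a sketch - it assumes the proxy is running locally on port 8000 with a model such as gpt-3.5-turbo configured; adjust the URL and model name to your setup):

    # Either path should return a completion if the route is registered
    curl http://0.0.0.0:8000/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]}'

    curl http://0.0.0.0:8000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]}'
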
DBairdME commented 1 year ago

Hi. OK, I've redeployed the proxy using the litellm repo (rather than the litellm-proxy repo) and this addresses the /v1/ prefix issues.

Smart-chatbot-ui's call to /v1/models returns a 404 when selecting a 'New Chat' within the chatbot:

    INFO: 172.19.8.XXX:53072 - "GET /v1/models HTTP/1.1" 404 Not Found

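(To reproduce from the command line - assuming the proxy is listening on port 8000 - the two paths can be compared directly:)

    curl http://0.0.0.0:8000/models      # expected to return the configured model list
    curl http://0.0.0.0:8000/v1/models   # returns 404 Not Found before the fix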

krrishdholakia commented 1 year ago

Great. @DBairdME, how did you find the litellm-proxy repo? It should route to litellm.

Looks like we're missing the /v1/ prefix for models. I'll add it now.

DBairdME commented 1 year ago

Hi, it came up when just searching for litellm-proxy. At the moment the Git repository for it is the top search result returned by Google.

krrishdholakia commented 1 year ago

Change pushed @DBairdME, should be part of v0.10.2

Would love to give you a shoutout when we announce this on our changelog. Do you have a Twitter/LinkedIn?