Open DBairdME opened 1 year ago
Hi, I'm the maintainer of LiteLLM - how can I help with this?
I'm keen to see what changes would be needed to allow calls to either LiteLLM or LiteLLM-Proxy in place of the current code that targets the OpenAI or Azure OpenAI LLMs. This could mean simplifying the UI so that the LLM choice is determined by LiteLLM; alternatively, LiteLLM could be queried for the LLMs currently configured for use, and the chatbot UI would then show that list so the user can select the LLM they wish to use.
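For illustration, here's a rough sketch of that second option. The host, port and the OpenAI-style GET /models endpoint are assumptions on my part, not actual smart-chatbot-ui or LiteLLM code:

```python
# Sketch only: assumes the LiteLLM proxy exposes an OpenAI-style GET /models
# endpoint and is reachable at PROXY_BASE.
import requests

PROXY_BASE = "http://0.0.0.0:8000"  # wherever the LiteLLM proxy is running

def list_configured_models() -> list[str]:
    """Ask the proxy which models it is configured to serve."""
    resp = requests.get(f"{PROXY_BASE}/models", timeout=10)
    resp.raise_for_status()
    # OpenAI-style payload: {"data": [{"id": "gpt-3.5-turbo", ...}, ...]}
    return [model["id"] for model in resp.json().get("data", [])]

if __name__ == "__main__":
    # The chatbot UI could render this list as its model picker.
    print(list_configured_models())
```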
@DBairdME I'll have a tutorial for this today
Hey @DBairdME
I believe this is what you need to do, assuming you're running it locally:
Clone the repo
git clone https://github.com/dotneet/smart-chatbot-ui.git
Install Dependencies
npm i
Create your env
cp .env.local.example .env.local
Set the API Key and Base
OPENAI_API_KEY="my-fake-key"
OPENAI_API_HOST="http://0.0.0.0:8000"
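Once that's set, a quick sanity check you can run outside the chatbot (the model name, host and port here are assumptions - use whatever your proxy is configured with):

```python
# Sanity check: POST an OpenAI-style chat completion to the proxy configured
# in OPENAI_API_HOST above. Assumes it listens on http://0.0.0.0:8000 and
# accepts the placeholder key from OPENAI_API_KEY.
import requests

resp = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    headers={"Authorization": "Bearer my-fake-key"},
    json={
        "model": "gpt-3.5-turbo",  # whichever model your proxy serves
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
print(resp.status_code, resp.json())
```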
Let me know if this works for you @DBairdME
Otherwise, happy to hop on a 30-minute call and get this working for you - https://calendly.com/kdholakia
Hi Krish, thanks for those notes. I see that if the LiteLLM-Proxy\main.py file is amended to use /v1/ as a prefix in the proxy config, I can use the proxy to communicate with the LLM (without the /v1/ prefix the proxy isn't able to respond correctly). Interestingly, if you choose a new chat within the chatbot, the call to /v1/models gets stuck and the app is not able to take any user input.
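To make the prefix point concrete, this is the kind of change I mean - purely illustrative FastAPI code, not LiteLLM-Proxy's actual main.py - where each handler is registered under both the bare path and the /v1 path so OpenAI-SDK clients like the chatbot can resolve their calls:

```python
# Illustrative only - not LiteLLM-Proxy's real code. Shows one way a FastAPI
# proxy can serve the same handlers with and without the /v1 prefix.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/chat/completions")
@app.post("/v1/chat/completions")
async def chat_completions(request: Request):
    body = await request.json()
    # ...forward `body` to the configured LLM and return an OpenAI-style response
    return {"echoed_model": body.get("model")}

@app.get("/models")
@app.get("/v1/models")
async def models():
    # ...return whatever models this proxy is configured to serve
    return {"object": "list", "data": [{"id": "gpt-3.5-turbo", "object": "model"}]}
```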
Hey @DBairdME, can you explain that a bit more? What's the error you're seeing?
We have support for both /v1/chat/completions and /chat/completions.
Here are all the available endpoints:
Hi. OK, I've redeployed the proxy using the litellm repo (rather than the litellm-proxy repo), and this addresses the /v1/ prefix issues.
Smart-chatbot-ui's call to /v1/models returns a 404 when selecting 'New Chat' within the chatbot, as shown here:
INFO: 172.19.8.XXX:53072 - "GET /v1/models HTTP/1.1" 404 Not Found
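For reference, this is how I'm checking both paths against the running proxy (host and port are just my local setup):

```python
# Quick check of both routes; at the moment /models answers but /v1/models
# returns the 404 shown in the log line above.
import requests

BASE = "http://0.0.0.0:8000"
for path in ("/models", "/v1/models"):
    r = requests.get(BASE + path, timeout=10)
    print(path, r.status_code)
```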
Great. @DBairdME, how did you find the litellm-proxy repo? It should route to litellm.
Looks like we're missing the /v1/ prefix for /models. I'll add it now.
Hi, it came up when simply searching for litellm-proxy. At the moment its Git repository is the top search result returned by Google.
Change pushed @DBairdME, it should be part of v0.10.2.
Would love to give you a shoutout when we announce this on our changelog. Do you have a Twitter/LinkedIn?
Hi. Loving the development of the chatbot. Has any thought been given to integrating the front end with litellm or litellm-proxy to provide abstraction of the LLM being used? With the rapid development and availability of LLM models, having a front end such as smart-chatbot-ui able to leverage more LLMs (including those that may be locally hosted) would be a great development.