Closed · HyperUpscale closed this issue 6 months ago
Hi @HyperUpscale, to use any model, start by selecting the model provider in Your API Server (Local, OpenAI, Vertex, etc.), then fill in all the required fields in the settings. Once done, hit Save Changes to activate the input chatbox so you can send messages.
For example, if you're using a local model, as shown in the image, fill in the API Base Link field in the settings. This field is the address (host and port) where your locally deployed model is running. After filling it in, remember to save your changes, which will enable the input chatbox.
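As a rough illustration (the port is an assumption here; use whatever port your local server actually listens on, e.g. a LiteLLM proxy commonly uses 4000), the field value would look something like:

```
API Base Link: http://localhost:4000
```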
Similarly, for hosted providers such as OpenAI, you only need to enter your API Key.
I'll keep the issue open in case you have any further questions.
Ok... Thank you for the feedback; this is what I wanted to know. That means I need to use the workaround that works for any project that uses LiteLLM.
Just to let you know: with WSL and Docker, somehow "localhost" and "127.0.0.1" are not working on my setup (host.docker.internal helps, but with many issues), and even the LiteLLM creator doesn't know how to fix that. So you can close this, as the issue is related to the LLM router.
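For anyone hitting the same WSL/Docker issue: inside a container, "localhost" refers to the container itself, not the machine running the model server, which is why "host.docker.internal" can work when "127.0.0.1" does not. A minimal sketch of picking a base URL accordingly; the port (4000) and the `/.dockerenv` check are assumptions, not part of this project:

```python
import os


def resolve_api_base(port: int = 4000) -> str:
    # Heuristic: Docker creates /.dockerenv inside containers. When running
    # in a container, "localhost" is the container itself, so point at the
    # host via host.docker.internal (on Linux this may require starting the
    # container with --add-host=host.docker.internal:host-gateway).
    host = "host.docker.internal" if os.path.exists("/.dockerenv") else "127.0.0.1"
    return f"http://{host}:{port}"


print(resolve_api_base())
```

This is only a workaround sketch; it does not fix the underlying routing issue described above.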
Sorry for the questions, but maybe I am missing something.
I followed everything written, and I can't make it usable:
WHERE IS THE CHAT? There is something missing in the instructions, IMO.
Can you provide any advice on what may be missing, or what I need to install or run?