blazzbyte / OpenInterpreterUI

Simplify code execution with Open Interpreter UI Project with Streamlit. A user-friendly GUI for Python, JavaScript, and more. Pay-as-you-go, no subscriptions. Ideal for beginners.
MIT License

Missing instructions - No input text chatbox #14

Closed HyperUpscale closed 6 months ago

HyperUpscale commented 6 months ago

Sorry for the questions, but maybe I am missing something

I followed everything written, and I can't make it usable: [screenshot]

WHERE IS THE CHAT? There is something missing in the instructions, IMO. [screenshot]

Can you provide any advice on what might be missing, or what I need to install or run?

blazzbyte commented 6 months ago

Hi @HyperUpscale, to use any model, start by selecting the model provider under "Your API Server" (Local, OpenAI, Vertex, etc.), then fill in all the required fields in the settings. Once done, hit Save Changes to activate the input chatbox so you can send messages.

For example, if you're using a local model, as shown in the image, fill in the API Base Link field in the settings. This field points to the address (host and port) where your locally deployed model is running. After filling it in, remember to save changes, and that will enable the input chatbox.
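For reference, a minimal sketch of what that setting corresponds to under the hood (the UI routes requests through LiteLLM; the port 1234 and the model name below are placeholders for whatever your local server actually exposes):

```python
import litellm

# Hypothetical local OpenAI-compatible server (e.g. llama.cpp / LM Studio)
# listening on port 1234 -- substitute your own port and model name.
response = litellm.completion(
    model="openai/local-model",            # "openai/" prefix routes to an OpenAI-compatible endpoint
    api_base="http://localhost:1234/v1",   # what the API Base field in settings maps to
    api_key="not-needed",                  # many local servers ignore the key, but one must be set
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```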

Similarly, for providers offered as a service, like OpenAI, you only need to enter your API Key.
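In code terms that amounts to nothing more than supplying the key, for example (a sketch; the model name is just an assumption):

```python
import os
import litellm

# For a hosted provider such as OpenAI, only the API key is required;
# LiteLLM picks it up from the environment automatically.
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder -- your real key goes here

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
```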

I'll keep the issue open in case you have any further questions.

HyperUpscale commented 6 months ago

Ok... Thank you for the feedback. This is what I wanted to know. That means I need to use the workaround that works for any project that uses LiteLLM.

Just to let you know, with WSL and Docker, "localhost" and "127.0.0.1" somehow don't work on my setup (host.docker.internal helps, but with many issues), and even the LiteLLM creator doesn't know how to fix that. So you can close this, as the issue is related to the LLM router.
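For anyone hitting the same thing: when the UI runs inside Docker/WSL and the model server runs on the host machine, pointing the API Base at host.docker.internal instead of localhost is the usual workaround (a minimal sketch; the port and model name are placeholders):

```python
import litellm

# Inside a Docker/WSL container, "localhost" refers to the container itself,
# so target the host machine instead (placeholder port).
response = litellm.completion(
    model="openai/local-model",
    api_base="http://host.docker.internal:1234/v1",
    api_key="not-needed",
    messages=[{"role": "user", "content": "ping"}],
)
```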