Closed: zhjygit closed this issue 3 months ago
Finally, I downloaded OpenUI and Ollama onto the physical host at 192.168.1.103. My Ollama is running fine on port 11434, and I have pulled the llama3 and llava models. I don't use Docker anywhere in the process. As shown above, I can run OpenUI, but I have no OpenAI API key and I don't know how to pass a key to OPENAI_API_KEY. Since I'm using Ollama, where is the OPENAI_API_KEY supposed to come from? Does OpenUI support local models without an OPENAI_API_KEY?
I also have one-api (or open-webui) running in a VMware guest at 192.168.1.169. Maybe I can get an API key from one-api, but how do I point OpenUI at the one-api host at 192.168.1.169?
No need for an API key. Just set OLLAMA_HOST and choose a model from the settings pane.
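For example, if you're running from source, something like this should work (a rough sketch; adjust the URL to wherever your Ollama instance listens):

```bash
# Sketch: point OpenUI at an existing Ollama instance (URL from the setup described above)
export OLLAMA_HOST=http://192.168.1.103:11434
python -m openui
```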
I do not have an OpenAI API key but I do have my own Ollama instance. If I remove the OPENAI_API_KEY var and set the OLLAMA_HOST var to my Ollama URL, the container fails to start, complaining about the openai_api_key var not being set or something.
No no no, I don't use the OpenUI Docker image; I just run OpenUI locally. If I do not set OPENAI_API_KEY, python -m openui will not run and fails with an error like: openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable.
I don't know if you have solved your problem already, but it seems similar to this issue. The solution worked for me.
So, if I unset OPENAI_API_KEY then I get: openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
After setting OLLAMA_HOST to my localhost, I get a choice of models from Ollama and can choose one, but then I get lots of errors and a 500. What is the correct way of running Ollama here?
@sokoow you can just set OPENAI_API_KEY to something like xxx if you don't want to use that API. If you're seeing Ollama models in the list, the application is able to list them. What are the errors you're getting when attempting to use one of the models? You should see a stacktrace in the terminal where you ran the server.
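For example (a minimal sketch; the dummy key just has to be non-empty, and the Ollama URL should match your setup):

```bash
# Sketch: a placeholder key satisfies the OpenAI client check; Ollama does the actual work
export OPENAI_API_KEY=xxx
export OLLAMA_HOST=http://localhost:11434
python -m openui
```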
Indeed, after I set OPENAI_API_KEY to an empty string I got a bunch of errors; after setting it to something else, everything works fine. Thanks for the reply @vanpelt
I am not getting any models to select after launching via Docker with docker run --rm --name openui -p 7878:7878 -e OLLAMA_HOST=http://localhost:11434 ghcr.io/wandb/openui. Did anyone run into this error as well?
P.S. Running Ollama locally with 2 different models downloaded.
Hey Paul, if you're running Ollama on localhost you'll likely need to set OLLAMA_HOST=http://host.docker.internal:11434, because Docker is running from within a VM that has a different localhost (unless you're running on Linux).
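So your run command would become something like (a sketch based on the command you posted, assuming Docker Desktop where host.docker.internal resolves to the host machine):

```bash
# Sketch: same run command, but pointing at the Docker host instead of the container's own localhost
docker run --rm --name openui -p 7878:7878 \
  -e OLLAMA_HOST=http://host.docker.internal:11434 \
  ghcr.io/wandb/openui
```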
Thanks for the tip! Unfortunately, it doesn't work either way... I still can't select any model. Any other suggestions?
Update: Removed the OPENAI_API_KEY=xxx environment variable and now it works. Thanks for the help :)
Nice! Glad that worked for you.
My OpenUI runs in an Ubuntu 18 VM under VMware Workstation at 192.168.1.169, and my Ollama and models are on the physical host at 192.168.1.103. How can I use the Ollama models from OpenUI inside the VMware guest?
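From the replies above, a sketch of what might work here (assuming the physical host's Ollama is reachable from the VM, no firewall blocks port 11434, and that OLLAMA_HOST also controls the bind address when used with ollama serve, which is not covered in this thread):

```bash
# On the physical host (192.168.1.103): make Ollama listen on all interfaces, not just localhost
OLLAMA_HOST=0.0.0.0 ollama serve

# In the Ubuntu VM (192.168.1.169): use a placeholder key and point OpenUI at the host's Ollama
export OPENAI_API_KEY=xxx
export OLLAMA_HOST=http://192.168.1.103:11434
python -m openui
```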