av / harbor

Effortlessly run LLM backends, APIs, frontends, and services with one command.
https://github.com/av/harbor
Apache License 2.0
564 stars · 37 forks

Open WebUI doesn't work #1

Closed · alsoasnerd closed 4 months ago

alsoasnerd commented 4 months ago

I just ran sudo ./harbor.sh up, but Open WebUI doesn't open: [screenshot]

Output of sudo ./harbor.sh logs: [screenshot]. I pulled gemma2:2b in Ollama, but that didn't fix the issue.

Edit: Sorry for my English, and thanks for opening this project.

av commented 4 months ago

Thanks for providing the logs. I think those are within the norm: they show WebUI trying to connect to some of the pre-configured OpenAI APIs (expected to fail when those aren't running).

To debug, could you please run:

harbor down
harbor up
harbor logs -n 200

And paste the full output here?

I suspect it's something related to the default volume configs and running Harbor with sudo (not supported). You could also try setting up Docker to run without sudo and spinning things up again; a common approach is sketched below.
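For reference, the usual way to set up sudo-less Docker on Linux (a sketch assuming the standard Docker Engine install, not the rootless daemon):

# add your user to the docker group
sudo usermod -aG docker $USER
# pick up the new group in the current shell (or log out and back in)
newgrp docker
# verify that docker now works without sudo
docker ps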

av commented 4 months ago

Ah, actually, there's Ollama in those logs. When Harbor is running, could you please try running one of the items from the Ollama HTTP catalog?
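For example, something like the following (a sketch; the host port assumes Ollama's default 11434 mapping, which may differ in your Harbor setup - check harbor ps for the actual one):

# list models currently loaded into memory
curl http://localhost:11434/api/ps
# list models available locally
curl http://localhost:11434/api/tags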

alsoasnerd commented 4 months ago

[+] Running 3/3
 ✔ Container ollama                Removed     0.8s
 ✔ Container webui                 Removed    12.7s
 ✔ Network harbor_harbor-network   Removed     0.1s
[+] Running 3/3
 ✔ Network harbor_harbor-network   Created     0.0s
 ✔ Container webui                 Started     0.4s
 ✔ Container ollama                Started     0.5s
ollama | 2024/08/01 21:06:36 routes.go:1109: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
ollama | time=2024-08-01T21:06:36.932Z level=INFO source=images.go:781 msg="total blobs: 5"
ollama | time=2024-08-01T21:06:36.932Z level=INFO source=images.go:788 msg="total unused blobs removed: 0"
ollama | time=2024-08-01T21:06:36.932Z level=INFO source=routes.go:1156 msg="Listening on [::]:11434 (version 0.3.2)"
ollama | time=2024-08-01T21:06:36.933Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama930778192/runners
webui  | Loading WEBUI_SECRET_KEY from file, not provided as an environment variable.
webui  | Generating WEBUI_SECRET_KEY
webui  | Loading WEBUI_SECRET_KEY from .webui_secret_key
ollama | time=2024-08-01T21:06:40.456Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 rocm_v60102 cpu cpu_avx cpu_avx2]"
ollama | time=2024-08-01T21:06:40.456Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
ollama | time=2024-08-01T21:06:40.714Z level=WARN source=amd_linux.go:59 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
ollama | time=2024-08-01T21:06:40.714Z level=WARN source=amd_linux.go:201 msg="amdgpu too old gfx000" gpu=0
ollama | time=2024-08-01T21:06:40.714Z level=INFO source=amd_linux.go:360 msg="no compatible amdgpu devices detected"
ollama | time=2024-08-01T21:06:40.714Z level=INFO source=types.go:105 msg="inference compute" id=GPU-a7c323b1-adc3-f60a-035c-49d246e01a4a library=cuda compute=7.5 driver=12.5 name="NVIDIA GeForce GTX 1650" total="3.6 GiB" available="3.6 GiB"
webui  | USER_AGENT environment variable not set, consider setting it to identify your requests.
webui  | INFO: Started server process [1]
webui  | INFO: Waiting for application startup.
webui  | /app
webui  |
webui  | [Open WebUI ASCII art banner]
webui  |
webui  | v0.3.10 - building the best open-source AI user interface.
webui  |
webui  | https://github.com/open-webui/open-webui
webui  |
webui  | INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
webui  | INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
webui  | INFO: Application startup complete.
webui  | INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
webui  | INFO  [apps.openai.main] get_all_models()
webui  | ERROR [apps.openai.main] Connection error:
webui  | ERROR [apps.openai.main] Connection error:
webui  | ERROR [apps.openai.main] Connection error:
webui  | INFO  [apps.ollama.main] get_all_models()
webui  | ERROR [apps.ollama.main] Connection error:
webui  | INFO: 172.22.0.1:46668 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P4Fn3v0 HTTP/1.1" 200 OK
webui  | INFO: ('172.22.0.1', 48402) - "WebSocket /ws/socket.io/?EIO=4&transport=websocket&sid=WXYKuM49ispwidb9AAAA" [accepted]
webui  | INFO: connection open
webui  | INFO  [apps.openai.main] get_all_models()
webui  | INFO  [apps.openai.main] get_all_models()
webui  | ERROR [apps.openai.main] Connection error:
webui  | ERROR [apps.openai.main] Connection error:
webui  | ERROR [apps.openai.main] Connection error:
webui  | ERROR [apps.openai.main] Connection error:
webui  | ERROR [apps.openai.main] Connection error:
webui  | ERROR [apps.openai.main] Connection error:
webui  | INFO  [apps.ollama.main] get_all_models()
webui  | INFO  [apps.ollama.main] get_all_models()
webui  | INFO  [apps.openai.main] get_all_models()
webui  | ERROR [apps.ollama.main] Connection error:
webui  | ERROR [apps.ollama.main] Connection error:
webui  | INFO: 172.22.0.1:46668 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P4Fn6qt&sid=WXYKuM49ispwidb9AAAA HTTP/1.1" 200 OK
webui  | INFO: 172.22.0.1:48410 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P4Fn6qv&sid=WXYKuM49ispwidb9AAAA HTTP/1.1" 200 OK
webui  | ERROR [apps.openai.main] Connection error:
webui  | ERROR [apps.openai.main] Connection error:
webui  | ERROR [apps.openai.main] Connection error:
webui  | INFO  [apps.ollama.main] get_all_models()
webui  | ERROR [apps.ollama.main] Connection error:
webui  | INFO: 127.0.0.1:49482 - "GET /health HTTP/1.1" 200 OK
webui  | INFO  [apps.openai.main] get_all_models()

alsoasnerd commented 4 months ago

> Ah, actually, there's Ollama in those logs. When Harbor is running, could you please try running one of the items from the Ollama HTTP catalog?

[screenshots of Ollama API responses]

av commented 4 months ago

Thank you for providing these! The Ollama responses look valid, so it's running as expected. But for some reason, WebUI is unable to establish a connection to it.

Harbor comes with an exec command to run things inside the services, so we can try pinging Ollama from within the WebUI container:

 harbor exec webui curl http://ollama:11434/api/ps

It should return the same response as a request made from the host if everything works as expected.
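For comparison, the host-side version of the same check would look roughly like this (the port is hypothetical; harbor ps shows the real mapping for the ollama service):

 curl http://localhost:33821/api/ps  # hypothetical host port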

av commented 4 months ago

Hmm.. I just tried running WebUI completely without Ollama, and it doesn't reproduce the same behavior as on your screenshot 🤔

# completely exclude ollama from running
harbor defaults webui

$ harbor ps
NAME      IMAGE                                COMMAND           SERVICE   CREATED              STATUS                        PORTS
webui     ghcr.io/open-webui/open-webui:main   "bash start.sh"   webui     About a minute ago   Up About a minute (healthy)   0.0.0.0:33801->8080/tcp, :::33801->8080/tcp

# webui logs
webui  | ERROR [apps.ollama.main] Connection error: Cannot connect to host ollama:11434 ssl:default [Name or service not known]
webui  | ERROR [apps.ollama.main] Connection error: Cannot connect to host ollama:11434 ssl:default [Name or service not known]

Still, WebUI opens and lets me log in as expected.

alsoasnerd commented 4 months ago

I tried running with rootless Docker; it doesn't fix the issue.

alsoasnerd commented 4 months ago

harbor.sh exec webui curl http://ollama:11434/api/ps
Service webui is running. Executing command...
{"models":[]}%

av commented 4 months ago

OK, so at this point it looks like everything is fine at the container level.

I'm afraid we're approaching the point where I need to ask you for a HAR from the Chrome Dev Tools, haha. Open the page that fails to load, open the Dev Tools, refresh the page, then save the HAR from the "Network" tab: [screenshot]

Alternatively, a screenshot of the Network tab contents with as many requests visible as possible would work.
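If the HAR export keeps coming out empty, a raw request against the page (using the WebUI port from the harbor ps output above) should also surface the timings:

 curl -sv -o /dev/null http://localhost:33801/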

alsoasnerd commented 4 months ago

vivaldi-webui.har.txt

av commented 4 months ago

@alsoasnerd, sorry, I'm afraid it's empty.

alsoasnerd commented 4 months ago

Sorry. localhost.har.txt

It seems that Open WebUI lets me log in, but loading takes about 15 seconds.

av commented 4 months ago

🤔 Yes, your screenshot indeed shows a loading screen.

I still think this HAR is incomplete; could you please just screenshot the entirety of the Network tab upon load?

Here's a HAR from hitting http://localhost:33801/ with no cache and log preserved on my machine, for reference: localhost.har.txt

alsoasnerd commented 4 months ago

[screenshots of the Network tab]

av commented 4 months ago

Thanks! This appears to be happening after the redirect to the /auth URL. Could you please:

Also, just as a side note in case it's unexpected: even though Open WebUI is fully local, it requires a signup on the first run by default (the data stays fully on your machine).

alsoasnerd commented 4 months ago

I know. I use Open WebUI on my host.

av commented 4 months ago

🤔 I also just tested a from-scratch setup on another machine; unfortunately, it didn't reproduce the problem with Open WebUI being stuck on a loading screen.

alsoasnerd commented 4 months ago

[screenshots of the Network tab]

av commented 4 months ago

Thanks for capturing these!

Basically, the sequencing looks nearly identical to what I have, but every HTTP request comes with a ~10s wait (?!). Based on the timeline view at the top, this delay is also fairly consistent. You can confirm that by opening the "Timing" section in the details of some of those requests: they should show ~10s of "Waiting for server response" (normally a few ms).

Now we should determine whether that's a host network problem or a host <-> Docker problem; a quick comparison is sketched below.
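A rough way to compare both sides with curl's built-in timing (the host port is the one from harbor ps above; 8080 is where Uvicorn listens inside the container, per your logs):

# total request time as seen from the host
curl -s -o /dev/null -w '%{time_total}\n' http://localhost:33801/health
# total request time as seen from inside the container
harbor exec webui curl -s -o /dev/null -w '%{time_total}\n' http://localhost:8080/health

If the first takes ~10s while the second returns in milliseconds, the problem sits between the host and Docker rather than in WebUI itself.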

alsoasnerd commented 4 months ago

No, for curl and docker ps it's instantaneous: [screenshot]

av commented 4 months ago

How about running anything within the WebUI container? It's really puzzling what could throttle the network like that in a local setup.

If you're still up to it, could you please bump to the latest Harbor version (a sketch of the update steps is below) and run harbor info?
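Assuming Harbor was installed by cloning the repo (the path below is hypothetical):

cd ~/harbor   # wherever the repo was cloned
git pull
harbor down && harbor up
harbor info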

alsoasnerd commented 4 months ago

harbor.sh info

Client:
 Version:    27.1.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
   Version: 0.16.2
   Path:    /usr/lib/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
   Version: 2.29.1
   Path:    /usr/lib/docker/cli-plugins/docker-compose

Server:
 Containers: 9
  Running: 8
  Paused: 0
  Stopped: 1
 Images: 9
 Server Version: 27.1.1
 Storage Driver: overlay2
  Backing Filesystem: btrfs
  Supports d_type: true
  Using metacopy: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 nvidia
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353.m
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.10.2-arch1-1
 Operating System: Archcraft
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 7.133GiB
 Name: arch
 ID: d3bde8c1-930d-40f5-89e3-a0f62ed062b3
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

alsoasnerd commented 4 months ago

Oh, it seems that pulling the latest version fixes the issue. Thanks for helping!

av commented 4 months ago

Good to hear! I can only guess it was related to the previous setup, where Open WebUI was always probing compatible services even when they weren't running.
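For anyone landing here on an older version: Open WebUI also exposes an environment switch to turn off the OpenAI integration (and with it, the connection probes). The variable name below is from Open WebUI's configuration docs; how to pass it through a given Harbor setup may vary:

# disables the OpenAI API integration in Open WebUI
ENABLE_OPENAI_API=false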