linuxserver / docker-kasm

Kasm Workspaces platform provides enterprise-class orchestration, data loss prevention, and web streaming technology to enable the delivery of containerized workloads to your browser.
GNU General Public License v3.0

[BUG] Unraid: upper fs does not support RENAME_WHITEOUT. #56

Closed: nspitko closed this issue 4 months ago

nspitko commented 6 months ago

Is there an existing issue for this?

- I have searched the existing issues

Current Behavior

While this image is running on Unraid, there is frequent (more than once per minute) spew in the kernel log:

May  9 23:17:20 jibril kernel: overlayfs: upper fs does not support RENAME_WHITEOUT.
May  9 23:17:22 jibril kernel: overlayfs: upper fs does not support RENAME_WHITEOUT.
May  9 23:17:22 jibril kernel: overlayfs: upper fs does not support RENAME_WHITEOUT.
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered blocking state
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered disabled state
May  9 23:17:22 jibril kernel: device veth37da7ab entered promiscuous mode
May  9 23:17:22 jibril kernel: eth0: renamed from veth56a27fb
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered blocking state
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered forwarding state
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered disabled state
May  9 23:17:22 jibril kernel: veth56a27fb: renamed from eth0
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered disabled state
May  9 23:17:22 jibril kernel: device veth37da7ab left promiscuous mode
May  9 23:17:22 jibril kernel: docker0: port 1(veth37da7ab) entered disabled state

This coincides with spew in the container's own log (Kasm runs a nested Docker daemon, so these are containerd messages from inside the container):

time="2024-05-09T23:30:11.018606437-07:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-05-09T23:30:11.018766017-07:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-05-09T23:30:11.018786907-07:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-05-09T23:30:11.019033491-07:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ffe3bf2d66c0af80420d615b2b4460653d7e9cda3a121362e6d44b4d9eb2e32e pid=2286 runtime=io.containerd.runc.v2

Expected Behavior

Kasm should not pollute the log file.

Steps To Reproduce

1) Install the image via the community app store in Unraid
2) Launch an instance
3) Observe the logs

Environment

- OS: Unraid
- How docker service was installed: Community app store
- /opt mount: /mnt/cache/appdata/kasm
- /mnt/cache format: zfs mirror

CPU architecture

x86-64

Docker creation

Extra params: --gpus all
/opt:  /mnt/cache/appdata/kasm

Container logs

[migrations] started
[migrations] no migrations found
usermod: no changes
───────────────────────────────────────

      ██╗     ███████╗██╗ ██████╗ 
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝ 

   Brought to you by linuxserver.io
───────────────────────────────────────

To support LSIO projects visit:
https://www.linuxserver.io/donate/

───────────────────────────────────────
GID/UID
───────────────────────────────────────

User UID:    911
User GID:    911
───────────────────────────────────────

[custom-init] No custom files found, skipping...
[ls.io-init] done.
time="2024-05-09T23:37:22.104006866-07:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-05-09T23:37:22.104487801-07:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-05-09T23:37:22.104511125-07:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-05-09T23:37:22.104811290-07:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/57be05bf9fd97ae2ac5ddd1d77500159f531cfc24a8d95225c9b92800d682388 pid=2211 runtime=io.containerd.runc.v2
time="2024-05-09T23:37:42.587858142-07:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-05-09T23:37:42.588029725-07:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-05-09T23:37:42.588048260-07:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-05-09T23:37:42.588305415-07:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d20e75c39f679956719fbb10e7ef32affdb2dd14c6ea9bdab7aa3e84e2c2492d pid=2440 runtime=io.containerd.runc.v2
time="2024-05-09T23:38:13.335136400-07:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-05-09T23:38:13.335292053-07:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-05-09T23:38:13.335321068-07:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2024-05-09T23:38:13.335576799-07:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e2080790b4e9fe5d5a75cb56602c69aa5f97da66d138c23e1e6995f4262475d4 pid=2580 runtime=io.containerd.runc.v2

(This last segment goes on for quite a while; truncated for readability.)
github-actions[bot] commented 6 months ago

Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.

LinuxServer-CI commented 5 months ago

This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions.

nspitko commented 4 months ago

Did some investigation, and it seems this is a known/expected issue with OpenZFS: ZFS did not support the RENAME_WHITEOUT operation that overlayfs requires until it was added in OpenZFS 2.2. This will go away once Unraid updates to that version.
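
For anyone who wants to verify this on their own box: the kernel message comes from overlayfs probing, at mount time, whether its upper filesystem supports the RENAME_WHITEOUT flag of renameat2(2). Below is a minimal sketch of the same probe (my own test program, not anything shipped with Kasm); it assumes glibc 2.28+ for renameat2 and must run as root, since creating a whiteout needs CAP_MKNOD. Point it at the ZFS-backed appdata path.

/* whiteout-probe.c: does the filesystem under DIR support RENAME_WHITEOUT?
 * Build: cc -o whiteout-probe whiteout-probe.c
 * Run:   sudo ./whiteout-probe /mnt/cache/appdata/kasm
 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#ifndef RENAME_WHITEOUT
#define RENAME_WHITEOUT (1 << 2) /* from <linux/fs.h> */
#endif

int main(int argc, char **argv)
{
    const char *dir = argc > 1 ? argv[1] : ".";
    char src[4096], dst[4096];
    snprintf(src, sizeof src, "%s/.whiteout-probe-src", dir);
    snprintf(dst, sizeof dst, "%s/.whiteout-probe-dst", dir);

    int fd = open(src, O_CREAT | O_EXCL | O_WRONLY, 0600);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);

    /* Rename src to dst and ask the kernel to leave a whiteout (a 0/0
     * character device) in src's place; this is the operation overlayfs
     * probes when it mounts, and the one ZFS before 2.2 rejects. */
    if (renameat2(AT_FDCWD, src, AT_FDCWD, dst, RENAME_WHITEOUT) == 0) {
        printf("%s: RENAME_WHITEOUT supported\n", dir);
        unlink(src); /* remove the whiteout left behind */
        unlink(dst);
        return 0;
    }
    printf("%s: RENAME_WHITEOUT not supported: %s\n", dir, strerror(errno));
    unlink(src); /* on failure the original file is still in place */
    return 1;
}

On OpenZFS older than 2.2 this should print "not supported" (EINVAL) for the appdata path, matching the kernel spew; on 2.2+ it should succeed. To confirm which module version is actually loaded, the running version can be read from sysfs, e.g. with this trivial sketch (the path is standard for the zfs kernel module):

/* zfs-version.c: print the loaded OpenZFS module version. */
#include <stdio.h>

int main(void)
{
    char buf[64];
    FILE *f = fopen("/sys/module/zfs/version", "r");
    if (!f) { perror("/sys/module/zfs/version"); return 1; }
    if (fgets(buf, sizeof buf, f))
        printf("OpenZFS %s", buf); /* e.g. "OpenZFS 2.1.14-1" */
    fclose(f);
    return 0;
}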

github-actions[bot] commented 3 months ago

This issue is locked due to inactivity