Closed rkdavies closed 1 year ago
Can you tell me what's not working? Can you share the output from omni logs omnimount?
@kelinger I am having the same issue. I noticed Plex stopped working, and it still is after a full OS reinstall, OmniStream reinstall, and rclone reconfig. The issue seems to be that rclone isn't properly mounting the remote share in ~/OmniStream/mnt.
/plexabyte@Plexabyte:~/OmniStream/mnt$ omni logs omnimount
omnimount | rclone v1.64.0
omnimount | - os/version: debian 12.1 (64 bit)
omnimount | - os/kernel: 5.15.0-84-generic (x86_64)
omnimount | - os/type: linux
omnimount | - os/arch: amd64
omnimount | - go/version: go1.21.1
omnimount | - go/linking: static
omnimount | - go/tags: none
omnimount |
omnimount | mergerfs v2.37.1
omnimount |
omnimount | https://github.com/trapexit/mergerfs
omnimount | https://github.com/trapexit/support
omnimount |
omnimount | ISC License (ISC)
omnimount |
omnimount | Copyright 2023, Antonio SJ Musumeci trapexit@spawn.link
omnimount |
omnimount | Permission to use, copy, modify, and/or distribute this software for
omnimount | any purpose with or without fee is hereby granted, provided that the
omnimount | above copyright notice and this permission notice appear in all
omnimount | copies.
omnimount |
omnimount | THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
omnimount | WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
omnimount | WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
omnimount | AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
omnimount | DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
omnimount | PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
omnimount | TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
omnimount | PERFORMANCE OF THIS SOFTWARE.
omnimount |
omnimount |
omnimount | Starting vnstat
omnimount | No interfaces found in database, adding available interfaces...
omnimount | Interface "eth0" added with 10000 Mbit bandwidth limit.
omnimount | -> 1 new interface found.
omnimount | Limits can be modified using the configuration file. See "man vnstat.conf".
omnimount | Unwanted interfaces can be removed from monitoring with "vnstat --remove".
omnimount |
omnimount | Configuration:
omnimount | MERGEMOUNT=cloud
omnimount | RCLONESERVICE=google
omnimount | RCLONEMOUNT=google
omnimount | UNSYNCED=unsynced
omnimount | UPLOADCACHE=uploadcache
omnimount | USENFS=false
omnimount | NFSREMOTE=
omnimount | NFSLOCAL=
omnimount | MEDIA=Media
omnimount | TURBOMAX=20
omnimount | Adding group `omniuser' (GID 1000) ...
omnimount | Done.
omnimount | Adding user `omniuser' ...
omnimount | Adding new user `omniuser' (1000) with group `omniuser (1000)' ...
omnimount | Creating home directory `/home/omniuser' ...
omnimount | Copying files from `/etc/skel' ...
omnimount | Adding new user `omniuser' to supplemental / extra groups `users' ...
omnimount | Adding user `omniuser' to group `users' ...
omnimount | Cleaning up leftovers
omnimount | Starting services
omnimount | VFSMAX=100G
omnimount | VFSAGE=48h
omnimount | VFSPOLL=5m
omnimount | VFSREAD=2G
omnimount | VFSCACHE=yes
omnimount | DIRCACHE=96h
omnimount | NFS Disabled
omnimount | OmniMount Caching: enabled
omnimount | {
omnimount | "jobid": 1
omnimount | }
omnimount |
omnimount | Startup complpete
Same issue here, and exactly the same log output. When I check the ~/OmniStream/mnt/cloud folder, the Media subfolder is there but it is empty, so none of the services (Sonarr, Plex, etc.) are able to do anything.
I tried omni restart initially; when I saw that the OmniMount container wouldn't come up, I did omni clean and omni up, which allowed the container to start, but it isn't connecting to Drive properly.
I was able to mount the share manually from rclone in the same directory, but Omnimount doesn't seem to detect the manually mounted share.
I think I know the fix, but it may take an hour or so to implement. Could you try setting a dummy variable in omni edit? There are three new fields for NFS. USENFS already equals false, which is correct, but my guess is that the blank parameters (which aren't used since it's disabled) are still causing an issue somewhere. Can you try setting them like this and see if it starts up correctly?
USENFS=false
NFSREMOTE=test
NFSLOCAL=test
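For what it's worth, one plausible mechanism for blank parameters breaking a shell-based startup, sketched below, is word-splitting. This is hypothetical (the actual OmniMount entrypoint isn't shown in this thread); the variable names are just the ones from the config above.

```shell
# Hypothetical sketch (not the actual OmniMount entrypoint) of why blank NFS
# parameters could break startup even with USENFS=false: an unquoted empty
# variable disappears during word-splitting, so any command line built from
# it ends up with fewer arguments than the script expects.
NFSREMOTE=""
NFSLOCAL="/mnt/nfs"
set -- mount -t nfs $NFSREMOTE $NFSLOCAL    # deliberately unquoted
echo "$# words"                             # the empty NFSREMOTE vanished

NFSREMOTE="test"                            # the suggested dummy value
set -- mount -t nfs $NFSREMOTE $NFSLOCAL
echo "$# words"                             # argument count is stable again
```

That would explain why a dummy value helps even though the NFS code path is disabled.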
Note that, once I get this working, I'll mark this as version 1.5 so that a buggy future change can be avoided by pinning this version.
That has worked for me. There was some error about permissions, but I couldn't see what it said before the container downloads took over. Sonarr and Plex are no longer complaining and I can see the folders under mnt. Thanks for the quick response!
This didn't work out for me. Still getting the same log output.
Mounter failed
rclone v1.64.0
- os/version: debian 12.1 (64 bit)
- os/kernel: 5.18.0-0.bpo.1-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.1
- go/linking: static
- go/tags: none
mergerfs v2.37.1
https://github.com/trapexit/mergerfs
https://github.com/trapexit/support
ISC License (ISC)
Copyright 2023, Antonio SJ Musumeci <trapexit@spawn.link>
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all
copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
Starting vnstat
Configuration:
MERGEMOUNT=cloud
RCLONESERVICE=Gdrive
RCLONEMOUNT=Gdrive
UNSYNCED=unsynced
UPLOADCACHE=uploadcache
USENFS=false
NFSREMOTE=test
NFSLOCAL=test
MEDIA=Media
TURBOMAX=20
addgroup: The group `omniuser' already exists.
adduser: The user `omniuser' already exists.
Cleaning up leftovers
Starting services
VFSMAX=100G
VFSAGE=48h
VFSPOLL=5m
VFSREAD=2G
VFSCACHE=no
DIRCACHE=96h
NFS Disabled
OmniMount Caching: disabled
{
"jobid": 1
}
touch: failed to close '/mnt/Gdrive/Media/omnimounted': Operation not permitted
Mounter failed
Did you do a full omni clean? I tried omni restart first and that didn't work for me.
I just did, and got hung up at the same exact spot :(
I just posted a new OmniStream and OmniMount and have tested it with USENFS as true and false and with NFS parameters populated and null (for non-NFS). All combinations worked for me.
Can those with issues try a full stop (omni clean) and an omni up afterward?
Before making any changes.
omnimount |
omnimount | Starting vnstat
omnimount |
omnimount |
omnimount | Configuration:
omnimount | MERGEMOUNT=cloud
omnimount | RCLONESERVICE=gsync-crypt
omnimount | RCLONEMOUNT=gsync-crypt
omnimount | UNSYNCED=unsynced
omnimount | UPLOADCACHE=uploadcache
omnimount | USENFS=false
omnimount | NFSREMOTE=
omnimount | NFSLOCAL=
omnimount | MEDIA=media
omnimount | TURBOMAX=10
omnimount | addgroup: The group `omniuser' already exists.
omnimount | adduser: The user `omniuser' already exists.
omnimount | Cleaning up leftovers
omnimount | Starting services
omnimount | VFSMAX=10G
omnimount | VFSAGE=5m
omnimount | VFSPOLL=1m
omnimount | VFSREAD=2G
omnimount | VFSCACHE=full
omnimount | DIRCACHE=96h
omnimount | NFS Disabled
omnimount | OmniMount Caching: disabled
omnimount | {
omnimount | "jobid": 1
omnimount | }
omnimount | touch: failed to close '/mnt/gsync-crypt/media/omnimounted': Operation not permitted
omnimount | Mounter failed
autoheal | 19-09-2023 19:26:27 Container /omnimount (ebb7dbb12e2e) found to be restarting - don't restart
autoheal | 19-09-2023 19:26:33 Container /omnimount (ebb7dbb12e2e) found to be restarting - don't restart
autoheal | 19-09-2023 19:26:38 Container /omnimount (ebb7dbb12e2e) found to be restarting - don't restart
omnimount:
    image: kelinger/omnimount:latest
    container_name: omnimount
    hostname: omnimount
    restart: unless-stopped
    environment:
Added the test parameters, did omni clean, then omni up. Same results.
omnimount | Starting vnstat
omnimount |
omnimount |
omnimount | Configuration:
omnimount | MERGEMOUNT=cloud
omnimount | RCLONESERVICE=gsync-crypt
omnimount | RCLONEMOUNT=gsync-crypt
omnimount | UNSYNCED=unsynced
omnimount | UPLOADCACHE=uploadcache
omnimount | USENFS=false
omnimount | NFSREMOTE=test
omnimount | NFSLOCAL=test
omnimount | MEDIA=media
omnimount | TURBOMAX=10
omnimount | LOCAL=gsync-crypt
omnimount | addgroup: The group `omniuser' already exists.
omnimount | adduser: The user `omniuser' already exists.
omnimount | Cleaning up leftovers
omnimount | Starting services
omnimount | VFSMAX=10G
omnimount | VFSAGE=5m
omnimount | VFSPOLL=1m
omnimount | VFSREAD=2G
omnimount | VFSCACHE=full
omnimount | DIRCACHE=96h
omnimount | NFS Disabled
omnimount | mkdir -p /mnt/gsync-crypt
omnimount | OmniMount Caching: disabled
omnimount | {
omnimount | "jobid": 1
omnimount | }
omnimount | touch: failed to close '/mnt/gsync-crypt/media/omnimounted': Operation not permitted
omnimount | Mounter failed
Here's a grab of the docker pull during omni up to see the versions if needed.
[+] Running 59/61
 ✔ traefik 3 layers [⣿⣿⣿] 0B/0B Pulled 20.2s
 ✔ af32133391e6 Pull complete 8.0s
 ✔ 1022d3e6eb6d Pull complete 19.4s
 ✔ ef9401db6143 Pull complete 19.5s
 ✔ nzbget 7 layers [⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 43.5s
 ✔ 21cc76473522 Pull complete 14.1s
 ✔ 665a26860e09 Pull complete 14.2s
 ✔ 4ca2a7f5f963 Pull complete 17.9s
 ✔ 0ea50c99c96f Pull complete 18.1s
 ✔ a4ea42dd54b6 Pull complete 38.8s
 ✔ 35a167e538c3 Pull complete 41.4s
 ✔ c793f3059520 Pull complete 41.5s
 ✔ radarr 2 layers [⣿⣿] 0B/0B Pulled 40.0s
 ✔ 848900726571 Pull complete 37.8s
 ✔ 704da134961c Pull complete 38.0s
 ✔ plex 6 layers [⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 45.4s
 ✔ a70d879fa598 Pull complete 8.3s
 ✔ c4394a92d1f8 Pull complete 8.3s
 ✔ 10e6159c56c0 Pull complete 8.4s
 ✔ d1042fe57e96 Pull complete 44.7s
 ✔ ac5317c7b384 Pull complete 44.8s
 ✔ 47414e89d67b Pull complete 44.9s
 ✔ autoheal 3 layers [⣿⣿⣿] 0B/0B Pulled 7.8s
 ✔ 7264a8db6415 Pull complete 6.0s
 ✔ 1ad4eee1074e Pull complete 7.1s
 ✔ 67695d6b9c5c Pull complete 7.2s
 ✔ oauth 2 layers [⣿⣿] 0B/0B Pulled 6.5s
 ✔ 2b233a225090 Pull complete 4.5s
 ✔ 2e62f1e450fc Pull complete 5.9s
 ✔ sonarr 8 layers [⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 48.0s
 ✔ 6ba42e543546 Pull complete 23.8s
 ✔ a7e73a3e61de Pull complete 23.9s
 ✔ d353881e21b8 Pull complete 24.0s
 ✔ 9c3d6634ad79 Pull complete 24.1s
 ✔ 85b24c3392cd Pull complete 33.5s
 ✔ fffe0c1dde1f Pull complete 33.8s
 ✔ 8d5a5f1ad54b Pull complete 46.1s
 ✔ 8d10608c0317 Pull complete 46.1s
 ✔ tautulli 7 layers [⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 45.6s
 ✔ efb124c083f2 Pull complete 7.4s
 ✔ b81e0d862760 Pull complete 7.5s
 ✔ d0f59184f0b9 Pull complete 7.7s
 ✔ af23b7546513 Pull complete 11.3s
 ✔ c56105954327 Pull complete 11.6s
 ✔ 012a31c241e7 Pull complete 43.8s
 ✔ f54f3e8a8f89 Pull complete 43.9s
 ⠹ overseerr 2 layers [⣦⣿] 115.3MB/161.2MB Pulling 69.3s
 ⠿ fc5c6e033681 Extracting [===================================> ] 115.3MB/161.2MB 67.3s
 ✔ 93fd37f728bf Download complete 15.7s
 ✔ watchtower 3 layers [⣿⣿⣿] 0B/0B Pulled 8.1s
 ✔ 7e1f4ce8770d Pull complete 4.6s
 ✔ cc408d374d64 Pull complete 6.0s
 ✔ 4412f0a27731 Pull complete 7.3s
 ✔ omnimount 7 layers [⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 46.2s
 ✔ 012c0b3e998c Pull complete 20.4s
 ✔ b21f0a19b097 Pull complete 20.5s
 ✔ b122d670b905 Pull complete 34.0s
 ✔ 7b0ab2744379 Pull complete 45.4s
 ✔ fe1f88278674 Pull complete 45.5s
 ✔ 4f4fb700ef54 Pull complete 45.6s
 ✔ 33d5339d36b3 Pull complete
I noticed others with this issue have Debian 12.1... I wonder if it's something OS-related?
omnimount | touch: failed to close '/mnt/gsync-crypt/media/omnimounted': Operation not permitted
omnimount | Mounter failed
omnimount | rclone v1.64.0
omnimount | - os/version: debian 12.1 (64 bit)
omnimount | - os/kernel: 6.1.0-11-amd64 (x86_64)
omnimount | - os/type: linux
omnimount | - os/arch: amd64
omnimount | - go/version: go1.21.1
omnimount | - go/linking: static
omnimount | - go/tags: none
From the bugs, it's possible that some mounts were left in an unstable state. On the host, go to your mount directory for OmniStream (default = ~/OmniStream/mnt). After an omni clean you should NOT see your cloud or rclone mounts, but you could possibly still see the unsynced and uploadcache directories (since these are local, not cloud-based, we don't clear them out).
If you do see extra directories (and you may even get an error when you run ls here), then run:
fusermount -uz dirname (e.g. "fusermount -uz cloud")
sudo umount -f dirname
sudo rmdir dirname
(If the rmdir command tells you that the directory isn't empty, you can try sudo rm -r dirname instead, but make sure that this directory is truly delete-able and not a mount point, since you DON'T want to delete the cloud side of this.)
Now try the omni up again.
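Those manual steps can be wrapped in a small script; a sketch, assuming the default mount directory and the mount names used in this thread (adjust both to your setup, and run with sudo if the unmounts need it):

```shell
#!/bin/sh
# Clean up stale mounts left behind after omni clean. Failures from
# fusermount/umount are ignored so the rmdir still runs on plain leftover
# directories; rmdir (not rm -r) is used so a still-mounted cloud directory
# is never deleted by accident.
cleanup_stale_mounts() {
    dir="$1"
    for name in cloud gsync-crypt google; do
        target="$dir/$name"
        fusermount -uz "$target" 2>/dev/null || true  # lazily detach FUSE mount
        umount -f "$target" 2>/dev/null || true       # forced unmount fallback
        rmdir "$target" 2>/dev/null && echo "removed $target"
    done
}

cleanup_stale_mounts "${1:-$HOME/OmniStream/mnt}"
```

Anything that survives the rmdir is either non-empty or still mounted, which is exactly the case where you should stop and look before reaching for rm -r.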
@wickedshrapnel - were your errors after I posted the latest fixes and OmniMount containers?
@kelinger I just ran everything again right now. I also deleted the /mnt/gsync-crypt/media/omnimounted file that it creates. Restarting omnimount recreates that file, but the issue seems to be with failing to close it. Maybe some new permission restriction in Debian 12.1 that wasn't in previous versions of Debian?
Here is the output of commands and subsequent logs.
jason@PlexCloud:~/OmniStream/mnt$ omni clean
[+] Running 8/8
 ✔ Container omnimount Removed 0.1s
 ✔ Container oauth Removed 0.4s
 ✔ Container tautulli Removed 4.8s
 ✔ Container autoheal Removed 0.6s
 ✔ Container watchtower Removed 0.5s
 ✔ Container overseerr Removed 4.6s
 ✔ Container traefik Removed 4.5s
 ✔ Network OmniNet Removed 0.2s
traefik Removed record f42d1cb666bbfeb96a942e4129076299
oauth Removed record 6faf991dd5cc8e741be0cc8759d105a0
pc Removed record 4c82461c02ca4d3794c79fe4f8bc49ab
tautulli Removed record a6b6c9542f54d4a30f3441d5831ee4f3
nzbget Removed record e69ecf53b1ce3ce3ab611325cbe33b43
radarr Removed record 64f96796b7d36cf75da3a2ce33118455
sonarr Removed record 697213269ae48308efd9c0683861f3b9
overseerr Removed record 2926c221728e804b49b4033e511ca696
Deleted Images:
untagged: containrrr/watchtower:latest
untagged: containrrr/watchtower@sha256:0ca7a88fd0748aa6f32e50b67eb11148cdb989fc595264c2778c85297a2c1abe
deleted: sha256:f847e1adb570c2cc11d1e613cad97baf3cdfe83ddd3c1a29ada848cfbd4f7f3f
deleted: sha256:d29c93d18336c2169b9adae0a8fa8ab28b285a74a54db8963cb06a2e0cf709ef
deleted: sha256:02fc6ee9c377735bb68a430241511c486b9ab5a6fda0e28edb4865ef85920ba7
deleted: sha256:3a26d205940c7fc9083fa9d5d9a7729c3d5c6eea72c814c7543d9a4b59ffcaaa
untagged: traefik:latest
untagged: traefik@sha256:429f3398a3cd1aa7436aa4f59d809040d3903506a9d83bee61688bb1429c7693
deleted: sha256:2ae1addee1b2f3bd2ff67edf06e8cf028e1ca44f99a6fbf51dfb0b2eec546949
deleted: sha256:75d31d92f373e86c4b6d8a222845114d1da7af22a7959218d9d24c2b09a15f0a
deleted: sha256:295ed1c07685bc29869e2528985b642d88576d88fdf05039242fc22c8c91fa51
deleted: sha256:0d0e4905eb0018b4ec978102695a9548f808ff21c824a5a1e28a8a080c35eed0
untagged: lscr.io/linuxserver/tautulli:latest
untagged: lscr.io/linuxserver/tautulli@sha256:9386d3f5fba53804819b6856c8993091db3dda95b79879f685d612e462e6ffe8
deleted: sha256:42cc2b2172f77e196a898868ea68d98f30b83628d10ddfa31969d6d44363f620
deleted: sha256:c2b5a8e2dda7d04b92afb266ee30fe56320c6b8e9d5332936948d62789540888
deleted: sha256:5d0d7a15fec7eb45dd900b013620f9ad5a7c4289ca7c533319705e4d258801d7
untagged: kelinger/omnimount:latest
untagged: kelinger/omnimount@sha256:ab264d99a29e81db4394709416ef86eb1983095cc30afa3564f782989915ad78
deleted: sha256:8b345cb8d871b5e15f425e693c9002da92278d7b14ea674fff7f9f676d8935eb
deleted: sha256:540f2d9e4be69908d8b25d352ba49bba7855de35bcf14e250fbb145705200960
deleted: sha256:312238220b9b09f1e6e4fba090948c9caa8b76a8eecc9c1dd72d28d464750e23
deleted: sha256:c6716d424993a82daf2e0b2e3eb87d815752c7f0d241a1e33dfbfe026ddece8b
deleted: sha256:1b58bd1c17dc4f64d24f95e54fe8705e5505c17820ad7f69e923ea9961d20f54
deleted: sha256:b8692d17aa37fb5ff647b105332aff7fe582306b206b37d82df917fc43bb8ef1
deleted: sha256:9757aca731424e4a7570dfb3701ba96eb06f570fae56cf8c99a4ab850e7860fd
deleted: sha256:b8544860ba0b7d8751836ee3b386eb4faa732d87d63f6dec7d5948c520b0c181
untagged: lscr.io/linuxserver/overseerr:latest
untagged: lscr.io/linuxserver/overseerr@sha256:b5e038c47d598de471ff769884cb95f6ffeddcad8812582fb2895c44263ce6ff
deleted: sha256:7e9babc5c9ea46fb99604c8e90829da9aa49b9a06bc8a7db8105ebeda068dee8
deleted: sha256:6f0d8e48fa2e4934281ad119b46d7e52a33a5959b2d6676a216d176c081556ed
deleted: sha256:e84b7d10603d3175c14c6de970f9b719102a720ffbc8cb7a25c6aeac8250e281
deleted: sha256:ef22a7635d82991249b799ed8f742e3c8251e7878dbbc94f0bb0547285029fdc
deleted: sha256:11a1490ce64673e0cda24cb92395bdfe2ad368a91c65d1f4584b206974da9b4e
deleted: sha256:ec3ca5816c5fc41d2b09f4197225ee05e162e97bb0ae37a2462cc274e6a3c886
deleted: sha256:ca034aed921a9b0f803276adc8e1f2eee160a85e645080563cd34b9a6caed8e8
deleted: sha256:e5800f62c777408c0a859bf3a4a4c9b98ad47bfb9d2ad17ee8949d9722b4a9c2
untagged: thomseddon/traefik-forward-auth:latest
untagged: thomseddon/traefik-forward-auth@sha256:b364aa6a4117569163eff793999901f9f5a0c4f7f2da18b4ecbcd140d7b6107b
deleted: sha256:c5658e75448ec0655050e9e89ac6693320fd59888cfbdf76bbd3ab5464275079
deleted: sha256:96bc3426e22a39d81b74ded7e9068ba9486cf6e721983bb1180b2f9836eef704
deleted: sha256:37c4148fa37c75b979932e4d519523545354a9bc54be13d43651c9b7a39ce6f0
untagged: willfarrell/autoheal:latest
untagged: willfarrell/autoheal@sha256:e1444b02c47a47262465458d23e649afc414df6ce70757e9026fc669de4b129d
deleted: sha256:06259621e7f26a16fbf73e7b19d86b14f0c4a72ba2e3fb902afd6642587395fd
deleted: sha256:9910eb72d61f56740ef522c1b6795f452f83540fc8e4c8e53bbcf105432e032d
deleted: sha256:835dc08c6db93ad5658ff8209e910582d817dfae146625e09c7330b63f57dcf3
deleted: sha256:4693057ce2364720d39e57e85a5b8e0bd9ac3573716237736d6470ec5b7b7230
Total reclaimed space: 1.382GB
jason@PlexCloud:~/OmniStream/mnt$ ls -l
total 8
drwxr-xr-x 2 jason jason 4096 Sep 16 19:07 unsynced
drwxrwxr-x 5 jason jason 4096 Oct 24 2022 uploadcache
jason@PlexCloud:~/OmniStream/mnt$ omni up
Already up to date.
Rebuilding config file...Environment updated
traefik Created traefik.domain.cc (cached)
oauth Created oauth.domain.cc (cached)
pc Created pc.domain.cc (cached)
tautulli Created tautulli.domain.cc (cached)
nzbget Created nzbget.domain.cc (cached)
radarr Created radarr.domain.cc (cached)
sonarr Created sonarr.domain.cc (cached)
overseerr Created overseerr.domain.cc (cached)
[+] Running 61/61
✔ overseerr 2 layers [⣿⣿] 0B/0B Pulled 72.8s
✔ fc5c6e033681 Pull complete 71.1s
✔ 93fd37f728bf Pull complete 71.2s
✔ radarr 6 layers [⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 32.9s
✔ b81e0d862760 Pull complete 8.0s
✔ d0f59184f0b9 Pull complete 8.0s
✔ af23b7546513 Pull complete 11.7s
✔ c56105954327 Pull complete 12.0s
✔ 848900726571 Pull complete 31.0s
✔ 704da134961c Pull complete 31.1s
✔ omnimount 7 layers [⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 44.4s
✔ 012c0b3e998c Pull complete 20.5s
✔ b21f0a19b097 Pull complete 20.6s
✔ b122d670b905 Pull complete 33.5s
✔ 7b0ab2744379 Pull complete 43.7s
✔ fe1f88278674 Pull complete 43.7s
✔ 4f4fb700ef54 Pull complete 43.7s
✔ 33d5339d36b3 Pull complete 43.8s
✔ tautulli 3 layers [⣿⣿⣿] 0B/0B Pulled 43.9s
✔ efb124c083f2 Pull complete 7.9s
✔ 012a31c241e7 Pull complete 42.0s
✔ f54f3e8a8f89 Pull complete 42.1s
✔ plex 6 layers [⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 43.3s
✔ a70d879fa598 Pull complete 7.6s
✔ c4394a92d1f8 Pull complete 7.8s
✔ 10e6159c56c0 Pull complete 7.8s
✔ d1042fe57e96 Pull complete 42.7s
✔ ac5317c7b384 Pull complete 42.8s
✔ 47414e89d67b Pull complete 42.8s
✔ sonarr 8 layers [⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 47.0s
✔ 6ba42e543546 Pull complete 30.5s
✔ a7e73a3e61de Pull complete 30.6s
✔ d353881e21b8 Pull complete 30.8s
✔ 9c3d6634ad79 Pull complete 30.9s
✔ 85b24c3392cd Pull complete 37.7s
✔ fffe0c1dde1f Pull complete 38.0s
✔ 8d5a5f1ad54b Pull complete 45.2s
✔ 8d10608c0317 Pull complete 45.3s
✔ traefik 4 layers [⣿⣿⣿⣿] 0B/0B Pulled 20.3s
✔ 7264a8db6415 Pull complete 5.2s
✔ af32133391e6 Pull complete 7.0s
✔ 1022d3e6eb6d Pull complete 19.2s
✔ ef9401db6143 Pull complete 19.4s
✔ nzbget 7 layers [⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 40.1s
✔ 21cc76473522 Pull complete 11.2s
✔ 665a26860e09 Pull complete 11.3s
✔ 4ca2a7f5f963 Pull complete 15.3s
✔ 0ea50c99c96f Pull complete 15.6s
✔ a4ea42dd54b6 Pull complete 35.8s
✔ 35a167e538c3 Pull complete 38.3s
✔ c793f3059520 Pull complete 38.4s
✔ watchtower 3 layers [⣿⣿⣿] 0B/0B Pulled 8.7s
✔ 7e1f4ce8770d Pull complete 5.4s
✔ cc408d374d64 Pull complete 7.0s
✔ 4412f0a27731 Pull complete 8.0s
✔ oauth 2 layers [⣿⣿] 0B/0B Pulled 1.3s
✔ 2b233a225090 Pull complete 0.3s
✔ 2e62f1e450fc Pull complete 0.8s
✔ autoheal 2 layers [⣿⣿] 0B/0B Pulled 7.7s
✔ 1ad4eee1074e Pull complete 6.9s
✔ 67695d6b9c5c Pull complete 7.0s
[+] Running 12/12
✔ Network OmniNet Created 0.1s
✔ Container watchtower Started 1.8s
✔ Container traefik Healthy 12.3s
✔ Container autoheal Started 1.7s
✘ Container omnimount Error 16.3s
✔ Container oauth Started 1.4s
✔ Container tautulli Started 1.4s
✔ Container radarr Created 0.0s
✔ Container sonarr Created 0.0s
✔ Container pc Created 0.0s
✔ Container overseerr Started 12.5s
✔ Container nzbget Created 0.0s
dependency failed to start: container omnimount is unhealthy
jason@PlexCloud:~/OmniStream/mnt$ omni logs omnimount
omnimount | rclone v1.64.0
omnimount | - os/version: debian 12.1 (64 bit)
omnimount | - os/kernel: 6.1.0-11-amd64 (x86_64)
omnimount | - os/type: linux
omnimount | - os/arch: amd64
omnimount | - go/version: go1.21.1
omnimount | - go/linking: static
omnimount | - go/tags: none
omnimount |
omnimount |
omnimount | mergerfs v2.37.1
omnimount |
omnimount |
omnimount | https://github.com/trapexit/mergerfs
omnimount | https://github.com/trapexit/support
omnimount |
omnimount |
omnimount | ISC License (ISC)
omnimount |
omnimount |
omnimount | Copyright 2023, Antonio SJ Musumeci trapexit@spawn.link
omnimount |
omnimount |
omnimount | Permission to use, copy, modify, and/or distribute this software for
omnimount | any purpose with or without fee is hereby granted, provided that the
omnimount | above copyright notice and this permission notice appear in all
omnimount | copies.
omnimount |
omnimount |
omnimount | THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
omnimount | WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
omnimount | WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
omnimount | AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
omnimount | DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
omnimount | PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
omnimount | TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
omnimount | PERFORMANCE OF THIS SOFTWARE.
omnimount |
omnimount |
omnimount |
omnimount |
omnimount | Starting vnstat
omnimount | No interfaces found in database, adding available interfaces...
omnimount | Interface "eth0" added with 10000 Mbit bandwidth limit.
omnimount | -> 1 new interface found.
omnimount | Limits can be modified using the configuration file. See "man vnstat.conf".
omnimount | Unwanted interfaces can be removed from monitoring with "vnstat --remove".
omnimount |
omnimount |
omnimount | Configuration:
omnimount | MERGEMOUNT=cloud
omnimount | RCLONESERVICE=gsync-crypt
omnimount | RCLONEMOUNT=gsync-crypt
omnimount | UNSYNCED=unsynced
omnimount | UPLOADCACHE=uploadcache
omnimount | USENFS=false
omnimount | NFSREMOTE=test
omnimount | NFSLOCAL=test
omnimount | MEDIA=media
omnimount | TURBOMAX=10
omnimount | LOCAL=gsync-crypt
omnimount | Adding group `omniuser' (GID 1000) ...
omnimount | Done.
omnimount | Adding user `omniuser' ...
omnimount | Adding new user `omniuser' (1000) with group `omniuser (1000)' ...
omnimount | Creating home directory `/home/omniuser' ...
omnimount | Copying files from `/etc/skel' ...
omnimount | Adding new user `omniuser' to supplemental / extra groups `users' ...
omnimount | Adding user `omniuser' to group `users' ...
omnimount | Cleaning up leftovers
omnimount | Starting services
omnimount | VFSMAX=10G
omnimount | VFSAGE=5m
omnimount | VFSPOLL=1m
omnimount | VFSREAD=2G
omnimount | VFSCACHE=full
omnimount | DIRCACHE=96h
omnimount | NFS Disabled
omnimount | mkdir -p /mnt/gsync-crypt
omnimount | OmniMount Caching: disabled
omnimount | {
omnimount | "jobid": 1
omnimount | }
omnimount | touch: failed to close '/mnt/gsync-crypt/media/omnimounted': Operation not permitted
omnimount | Mounter failed
Can you try an omni update to make sure you're using the latest OmniMount?
Done. Same issue.
jason@PlexCloud:~/OmniStream/mnt$ omni update
Updating OmniStream from GIT
Rebuilding OmniStream Docker stack configuration
Determining DNS assignments
Updating Docker containers with latest versions
[+] Pulling 11/11
 ✔ radarr Pulled 0.9s
 ✔ oauth Pulled 0.4s
 ✔ overseerr Pulled 0.9s
 ✔ nzbget Pulled 0.9s
 ✔ sonarr Pulled 0.9s
 ✔ traefik Pulled 0.4s
 ✔ omnimount Pulled 0.4s
 ✔ watchtower Pulled 0.3s
 ✔ autoheal Pulled 0.3s
 ✔ plex Pulled 0.3s
 ✔ tautulli Pulled 0.9s
Ensuring running environment matches latest versions and configs
[+] Running 7/7
 ✔ Container tautulli Running 0.0s
 ✔ Container autoheal Running 0.0s
 ✔ Container traefik Healthy 0.5s
 ✔ Container oauth Running 0.0s
 ✔ Container watchtower Running 0.0s
 ✔ Container overseerr Running 0.0s
 ✘ Container omnimount Error 0.5s
dependency failed to start: container omnimount is unhealthy
What directories are showing in ~/OmniStream/mnt ? Do you see cloud, gsync-crypt, unsynced, and uploadcache?
If you see cloud and gsync-crypt, do they show the files you'd expect to see or is one (likely cloud) empty or near-empty?
The mount failed so gsync-crypt doesn't work. Nothing showing in cloud. Are you testing on Debian 12.1?
jason@PlexCloud:~/OmniStream/mnt$ ls -l
ls: cannot access 'gsync-crypt': Transport endpoint is not connected
total 12
drwxr-xr-x 2 jason jason 4096 Sep 19 16:11 cloud
d????????? ? ? ? ? ? gsync-crypt
drwxr-xr-x 2 jason jason 4096 Sep 16 19:07 unsynced
drwxrwxr-x 5 jason jason 4096 Oct 24 2022 uploadcache
jason@PlexCloud:~/OmniStream/mnt$
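The d????????? entry above is the signature of a wedged FUSE mount: the name still shows up in the parent directory listing, but stat() on the path itself fails with "Transport endpoint is not connected." A small heuristic check for that state (a sketch; the mount names are just this thread's examples):

```shell
#!/bin/sh
# Detect a wedged FUSE mount: the entry is still listed by its parent
# directory (readdir works), but stat() on the path fails, which is what
# produces the d????????? row in ls -l output.
is_wedged() {
    dir=$(dirname "$1")
    name=$(basename "$1")
    ls "$dir" 2>/dev/null | grep -qx "$name" || return 1  # not listed at all
    stat "$1" >/dev/null 2>&1 && return 1                 # stats fine: healthy
    return 0                                              # listed but broken
}
```

A path that trips this check is a candidate for the fusermount -uz / umount -f cleanup discussed earlier.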
So after omni down or omni clean, does it correctly remove the gsync-crypt directory or does it remain like this? [Sorry if some of these questions sound redundant or nearly identical to earlier ones, but I believe I'm zeroing in on it.]
I am running Debian 12 (doesn't appear to be 12.1). However, this "shouldn't" matter because that is the whole point behind Docker: the containers themselves run in Debian, Ubuntu, Alpine, etc. and some use much older versions than "latest."
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 12 (bookworm)
Release: 12
Codename: bookworm
It removes them.
jason@PlexCloud:~/OmniStream/mnt$ omni down
[+] Running 8/8
✔ Container omnimount Removed 0.0s
✔ Container autoheal Removed 0.6s
✔ Container oauth Removed 0.4s
✔ Container tautulli Removed 4.8s
✔ Container overseerr Removed 4.6s
✔ Container watchtower Removed 0.4s
✔ Container traefik Removed 4.4s
✔ Network OmniNet Removed 0.2s
traefik Removed record 5314a7ffc19d115ad0ab24f5bcd29046
oauth Removed record c54c275b9f6266da1eee618a791f16f5
pc Removed record 7dfb1c38fa96433742219bb67d831e6a
tautulli Removed record ed64e5c2ffae4cce159f118486c9180b
nzbget Removed record 93db32dc3c7ea12679304d69cdc7fb69
radarr Removed record 8b9b79d208dae66abae5b1b7fe9bdfd6
sonarr Removed record b6edd99fa15aa40d283c8bd1d797e476
overseerr Removed record f3dfdcd1cd356f542c133156649a31a5
jason@PlexCloud:~/OmniStream/mnt$ ls -l
total 8
drwxr-xr-x 2 jason jason 4096 Sep 16 19:07 unsynced
drwxrwxr-x 5 jason jason 4096 Oct 24 2022 uploadcache
On the host, if you type logs and then cat rclone.log, is there anything at the end indicating what error(s) occurred trying to mount gsync-crypt?
This error started today. Repeating over and over for the last 4 hours.
2023/09/19 17:01:37 ERROR : media/omnimounted: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes
Eureka! That clears it all up. Since I had to rebuild OmniMount, it picked up a newer version of rclone. Apparently the caching/non-caching parameters have changed, and what worked with the old version doesn't work with the new. I've been testing with NFS on and NFS off, and with and without the NFS parameters set (for either config), but I didn't test caching on/off (it's on for me when I'm not using NFS) since I didn't touch any code related to that. It seems, though, that rclone did touch related code, so I have something I can test/fix now.
Stay tuned...
I tried editing the vfs.conf in the OmniMount config folder and changed it from full to writes. Didn't make a difference. Just FYI.
This is all I have in my vfs.conf file.
VFSMAX=10G
VFSAGE=5m
VFSPOLL=1m
VFSCACHE=writes
Sorry... it's not THAT flexible. VFSCACHE values for OmniMount are just "yes" and "anythingelseyouwanttoputhere", which gets translated as "no."
Update: setting mine to "VFSCACHE=no" effectively gives me the same error as you. Now that I can reproduce it, I should be able to fix it.
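A minimal sketch of that yes/no translation (hypothetical; the real entrypoint isn't shown in this thread), mapping VFSCACHE onto rclone's --vfs-cache-mode flag, which is what the earlier WriteFileHandle error hinges on:

```shell
#!/bin/sh
# Hypothetical translation of OmniMount's VFSCACHE setting into an rclone
# flag: per the maintainer's comment, only "yes" enables caching and any
# other value is treated as "no". With the cache off, rclone's logged error
# ("Can't open for write without O_TRUNC on existing file without
# --vfs-cache-mode >= writes") is exactly what breaks the startup touch of
# the omnimounted marker file.
vfs_flag() {
    case "$1" in
        yes) echo "--vfs-cache-mode writes" ;;
        *)   echo "--vfs-cache-mode off" ;;
    esac
}

vfs_flag yes
vfs_flag no
```

This also explains why setting VFSCACHE=writes in vfs.conf had no effect: any value other than "yes" falls through to the disabled branch.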
Sweet! Thanks for all you do!
Try omni update now. It should pull a newer OmniMount which should work with VFSCACHE=no.
Fixed! Thanks!
Updating Docker containers with latest versions
[+] Pulling 18/18
✔ plex Pulled 0.4s
✔ traefik Pulled 0.4s
✔ overseerr Pulled 1.0s
✔ oauth Pulled 0.3s
✔ omnimount 7 layers [⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 6.7s
✔ 012c0b3e998c Already exists 0.0s
✔ e0ea5a4d6007 Pull complete 0.2s
✔ 681cda719ce6 Pull complete 4.1s
✔ 11827aebc04f Pull complete 6.1s
✔ 4ca2ca2784d3 Pull complete 6.1s
✔ 4f4fb700ef54 Pull complete 6.2s
✔ e847195ab8a9 Pull complete 6.2s
✔ tautulli Pulled 1.0s
✔ nzbget Pulled 1.0s
✔ radarr Pulled 0.9s
✔ sonarr Pulled 1.0s
✔ autoheal Pulled 0.4s
✔ watchtower Pulled 0.4s
Ensuring running environment matches latest versions and configs
[+] Running 11/11
✔ Container tautulli Running 0.0s
✔ Container traefik Healthy 0.8s
✔ Container overseerr Running 0.0s
✔ Container oauth Running 0.0s
✔ Container omnimount Healthy 11.3s
✔ Container autoheal Running 0.0s
✔ Container watchtower Running 0.0s
✔ Container sonarr Started 12.0s
✔ Container radarr Started 11.7s
✔ Container pc Started 11.9s
✔ Container nzbget Started
Woohoo!
I appreciate your patience. Some of this "back and forth" can get tedious, but it helped zero in on the problem.
For anyone else following along, this is now posted as "omnimount:1.5" in Docker Hub. I wouldn't recommend forcing it to use that version because this project isn't big enough to support all combinations of configurations and versions. If something changes, I usually have to tweak several things, and if your system doesn't update accordingly, results may be "unexpected."
But I can likewise screw things up, as this thread shows, so temporarily forcing OmniMount to use an older version could get you past it, especially if I'm unavailable to fix it ASAP.
@kelinger Just want to confirm exactly what the fix is at this time. I did a full omni clean, a reboot, apt update/upgrade/autoremove, added VFSCACHE=no in vfs.conf, then omni up, and I'm still getting no rclone directory.
Should I temporarily force 1.5 instead of latest?
@meharrington90 - first, only disable VFSCACHE if you need to. The other user (wickedshrapnel) was already disabling their cache, and the new scripts didn't work with this setting. If you weren't disabling the cache before, there's no reason to do so just for this.
Make sure you're using the latest OmniMount container (run omni update), because there were changes to both OmniStream and OmniMount. You listed a lot of things you ran, but that command wasn't one of them.
@kelinger It seems I am still not getting the rclone share to mount properly. I did a full reinstall (again), updated all packages with apt update/upgrade/autoremove, and ran omni update. Still no go. Not sure why it's not working for me...?
Logs are below. It seems omni clean is still leaving the "cloud" directory behind (and not creating any other directories for rclone). I tried manually deleting it and running umount, which said the mount does not exist.
plexabyte@Plexabyte:~/OmniStream/components$ fusermount -uz cloud
fusermount: entry for /home/plexabyte/OmniStream/components/cloud not found in /etc/mtab
plexabyte@Plexabyte:~/OmniStream/components$ sudo umount -f cloud
umount: cloud: no mount point specified.
plexabyte@Plexabyte:~/OmniStream/components$ sudo umount -f google
umount: google: no mount point specified.
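For anyone else hitting the same "not found in /etc/mtab" errors above: a more reliable check than directory contents or guessing at umount targets is to ask the kernel's mount table directly with findmnt. The paths below are assumptions pieced together from the config echoed in the omnimount logs (MERGEMOUNT=cloud, RCLONEMOUNT=google), so adjust them to your layout.

```shell
# Report whether each expected mount point is actually mounted, according
# to the kernel's mount table (findmnt exits nonzero if the path is not
# a mount point).
check_mount() {
    if findmnt -n "$1" >/dev/null 2>&1; then
        echo "$1: mounted"
    else
        echo "$1: NOT mounted"
    fi
}

# Paths assumed from the thread's config output; adjust as needed.
check_mount "$HOME/OmniStream/mnt/cloud"
check_mount "$HOME/OmniStream/mnt/google"
```

If findmnt says NOT mounted, fusermount/umount will (correctly) report there is nothing to unmount, which matches the errors shown above.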
plexabyte@Plexabyte:~/OmniStream/components$ omni logs
oauth-hyper | time="2023-09-20T05:42:43Z" level=info msg="Listening on :4181"
traefik-hyper | time="2023-09-19T21:42:44-08:00" level=info msg="Configuration loaded from flags."
omnimount | rclone v1.64.0
omnimount | - os/version: debian 12.1 (64 bit)
omnimount | - os/kernel: 5.15.0-84-generic (x86_64)
omnimount | - os/type: linux
omnimount | - os/arch: amd64
omnimount | - go/version: go1.21.1
omnimount | - go/linking: static
omnimount | - go/tags: none
omnimount |
omnimount | mergerfs v2.37.1
omnimount |
omnimount | https://github.com/trapexit/mergerfs
omnimount | https://github.com/trapexit/support
omnimount |
omnimount | ISC License (ISC)
omnimount |
omnimount | Copyright 2023, Antonio SJ Musumeci <trapexit@spawn.link>
omnimount |
omnimount | Permission to use, copy, modify, and/or distribute this software for
omnimount | any purpose with or without fee is hereby granted, provided that the
omnimount | above copyright notice and this permission notice appear in all
omnimount | copies.
omnimount |
omnimount | THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
omnimount | WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
omnimount | WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
omnimount | AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
omnimount | DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
omnimount | PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
omnimount | TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
omnimount | PERFORMANCE OF THIS SOFTWARE.
omnimount |
omnimount |
omnimount | Starting vnstat
omnimount | No interfaces found in database, adding available interfaces...
omnimount | Interface "eth0" added with 10000 Mbit bandwidth limit.
omnimount | -> 1 new interface found.
omnimount | Limits can be modified using the configuration file. See "man vnstat.conf".
omnimount | Unwanted interfaces can be removed from monitoring with "vnstat --remove".
omnimount |
omnimount | Configuration:
omnimount | MERGEMOUNT=cloud
omnimount | RCLONESERVICE=google
omnimount | RCLONEMOUNT=google
omnimount | UNSYNCED=unsynced
omnimount | UPLOADCACHE=uploadcache
omnimount | USENFS=false
omnimount | NFSREMOTE=
omnimount | NFSLOCAL=
omnimount | MEDIA=Media
omnimount | TURBOMAX=20
omnimount | LOCAL=google
omnimount | Adding group `omniuser' (GID 1000) ...
omnimount | Done.
omnimount | Adding user `omniuser' ...
omnimount | Adding new user `omniuser' (1000) with group `omniuser (1000)' ...
omnimount | Creating home directory `/home/omniuser' ...
omnimount | Copying files from `/etc/skel' ...
omnimount | Adding new user `omniuser' to supplemental / extra groups `users' ...
omnimount | Adding user `omniuser' to group `users' ...
omnimount | Cleaning up leftovers
omnimount | Starting services
omnimount | VFSMAX=100G
omnimount | VFSAGE=48h
omnimount | VFSPOLL=5m
omnimount | VFSREAD=2G
omnimount | VFSCACHE=yes
omnimount | DIRCACHE=96h
omnimount | NFS Disabled
omnimount | mkdir -p /mnt/google
omnimount | OmniMount Caching: enabled
omnimount | {
omnimount | "jobid": 1
omnimount | }
omnimount |
omnimount | Startup complpete
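The {"jobid": 1} line in the startup log suggests the mount is kicked off as an asynchronous job through rclone's remote-control (rc) API, which would explain why the startup message prints even when the mount later fails. If that reading is right and the rc server is reachable (e.g. from a shell inside the omnimount container), the job's final result, including any swallowed mount error, can be inspected with job/status. This is a debugging sketch, not a documented OmniStream workflow:

```shell
# Ask rclone's rc API for the outcome of job 1 (the jobid printed in the
# omnimount startup log). Falls back to a message if rclone or its rc
# server is unavailable.
if command -v rclone >/dev/null 2>&1; then
    rclone rc job/status jobid=1 2>/dev/null || echo "rc server not reachable"
else
    echo "rclone not installed on this host"
fi
```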
@kelinger @TechPerplexed Any update on this? Still unable to mount my cloud drives.
Can you share the output from rclone.log? This should be in your ~/OmniStream/logs directory (if the file is really long, I'm really just looking for the recent messages from the last failed mount attempts)
The log seems to be completely empty. It's also owned by root:root, which is interesting. That's probably a clue.
plexabyte@Plexabyte:~/OmniStream/logs$ sudo ls -la
total 88
drwxrwxr-x 2 plexabyte plexabyte 4096 Sep 20 04:10 .
drwxr-xr-x 12 plexabyte plexabyte 4096 Sep 23 02:18 ..
-rw-rw-r-- 1 plexabyte plexabyte 352 Sep 22 04:10 backup.log
-rw-r----- 1 root root 0 Sep 19 21:31 rclone.log
-rw-r--r-- 1 root root 38980 Sep 23 01:29 traefik-access.log
-rw-r--r-- 1 root root 21384 Sep 23 02:18 traefik.log
-rw-rw-r-- 1 plexabyte plexabyte 144 Sep 23 02:15 turbosync.log
plexabyte@Plexabyte:~/OmniStream/logs$ sudo cat rclone.log
plexabyte@Plexabyte:~/OmniStream/logs$
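An empty, root-owned rclone.log is consistent with the container creating the file as root and rclone never writing a single line to it. One workaround sketch (not a root-cause fix, and OmniStream may simply recreate the file on the next omni up) is to inspect and clear the stale file so the next mount attempt starts with a fresh log:

```shell
# Inspect the stale log's owner and size, then remove it so the next
# startup recreates it. Path taken from the thread; removal needs sudo
# while the file is root-owned.
LOG="$HOME/OmniStream/logs/rclone.log"
if [ -e "$LOG" ]; then
    stat -c 'owner=%U size=%s' "$LOG"
    sudo rm -f "$LOG"
fi
```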
@kelinger Update? Not trying to be pushy, but the software is completely unusable at this point.
@meharrington90 I'm still investigating, but it's hard when I cannot duplicate your problem.
@kelinger I am using two servers (one for media hosting and one for downloading) and can give you access to them if that would help. PM me or email at meharrington90@gmail.com and I'll provide you some credentials.
Both servers are experiencing the same issue.
This should now be resolved. See #74
Note that the automated upgrade may require two down/up commands to integrate the changes to OmniStream (first time) and then the new upgrade process (second time). Alternatively, you can manually upgrade fuse on the host with sudo apt install -y fuse3 (though you should still upgrade OmniStream).
It appears the latest version of OmniMount has broken our Plex stack again.
Is it possible to request a tag for the version you released for Issue #69? That version worked and resolved all of our OmniMount access issues.
Thank you, RK Davies