hasse69 / rar2fs

FUSE file system for reading RAR archives
https://hasse69.github.io/rar2fs/
GNU General Public License v3.0

Possible non-rar2fs issue - Need advice #173

Closed zeeter82 closed 2 years ago

zeeter82 commented 2 years ago

So here recently I'm running into another issue where Plex will start its scheduled scans of my media pointed at my rar2fs mounts. After doing lots of troubleshooting and looking at Plex logs, it appears that a lot of the time it will randomly start getting errors while scanning the files, and then an entire path (or most of one) will be marked "unavailable" even though I can still browse to the file and play it back just fine in Plex. It just marks it as "unavailable" and puts the trash can icon on the item because it had issues during the scan/re-scan. So in theory this technically isn't breaking anything because the item is still playable in Plex, but I get left with a ton of items in my library with the trash can icon, which is very annoying.....very hard on my sysadmin OCD. It also means I can't empty the trash in Plex, because I don't want to accidentally remove anything and/or affect my DB with playback history etc.

I see a ton of these in my "Plex Media Server.log": WARN - Error scanning directory, we'll skip and continue: boost::filesystem::directory_iterator::construct: An unexpected network error occurred:

I also see a few of these in my syslog on VM: smbd[17617]: Too many open files, unable to open more! smbd's max open files = 16424

I'm just curious if maybe this is a file limits issue in my VM running rar2fs. Or maybe this is another issue entirely on my Windows box where I'm running my Plex server...something going on with Windows networking etc.

Any ideas? I posted something in the Plex Discord about this yesterday, but didn't get any replies yet.

zeeter82 commented 2 years ago

So I'm pretty sure my VM is getting this smbd file limits error and this is causing issues when scanning media in Plex....I can match up the timestamps of when the network errors happen in the Plex log with the smbd "too many open files" errors in syslog on the VM. I increased the max open files to 50000 but it's still erroring out....like it's not closing the files properly while scanning. I was told on Plex support discord that it shouldn't keep the files open while scanning so it shouldn't hit that high number. This only started happening in the last couple weeks, so not sure what else would have changed.

Any ideas?
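
For reference (not from the thread; a sketch assuming a standard Linux /proc layout), a quick way to see whether it is smbd's own limit or a system-wide one being hit, and how many descriptors smbd is actually holding:

SMBD_PID=$(pgrep -o smbd)                       # or pick a specific worker PID
grep 'Max open files' /proc/$SMBD_PID/limits    # per-process soft/hard limit
ls /proc/$SMBD_PID/fd | wc -l                   # descriptors that process holds right now
cat /proc/sys/fs/file-nr                        # allocated / free / system-wide maximum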

meowkat commented 2 years ago

I too have this same issue, but I'm running rar2fs on Ubuntu and sharing files out to a Plex VM via NFS. This is very frustrating. I think this issue started around the Plex Media Server 1.25.0 update. This really looks like an issue with how Plex scans media.

zeeter82 commented 2 years ago

Yep now that you mention it.... this definitely seems like an issue that started around the 1.25.x builds. Have you opened up a support case on Plex forums?

Here is mine if you want to comment there: https://forums.plex.tv/t/random-media-being-marked-unavailable-network-errors-in-logs-even-though-valid/765443/4

zeeter82 commented 2 years ago

Yeah, if there are settings I can change to work around this, I would love to know what they are. I've tried 50,000 and 150,000....both eventually hit the max, and then the Plex scanning starts to error as well and marks media as "unavailable" when it's actually fine.

Dec 20 15:14:24 plex-vm smbd[17396]:   Too many open files, unable to open more!  smbd's max open files = 50000

Dec 20 16:15:20 plex-vm smbd[61299]:   Too many open files, unable to open more!  smbd's max open files = 150000

hasse69 commented 2 years ago

Since we have several recent reports of problems related to PMS in combination with rar2fs, we cannot rule out that this is some issue with rar2fs. When this happens, can you see any rar2fs processes stuck using 100% CPU? In that case it is the release (close) that fails and never completes, and obviously that is going to cause the number of open files to grow constantly.

zeeter82 commented 2 years ago

Since we have several recent reports of problems related to PMS in combination with rar2fs, we cannot rule out that this is some issue with rar2fs. When this happens, can you see any rar2fs processes stuck using 100% CPU? In that case it is the release (close) that fails and never completes, and obviously that is going to cause the number of open files to grow constantly.

I didn't really notice this, but also wasn't really looking for this either. How can I easily check this? Would htop work? I have 12 rar2fs mounts running in my VM so I would need to watch them closely as the Plex scanner changes paths.

hasse69 commented 2 years ago

htop should work just fine. And I'm not sure you need to monitor anything; a process that is stuck is stuck. So once you start to get this error about too many open files, that would be an excellent time to check your current process status.
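
For anyone following along, a one-shot alternative to watching htop is to sort processes by CPU and filter for rar2fs (a sketch assuming standard procps tools; the [r] trick keeps grep from matching itself):

ps -eo pid,pcpu,stat,etime,args --sort=-pcpu | grep '[r]ar2fs'

A process pegged at ~100% CPU with a long elapsed time would match the stuck-release scenario described above.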

There is another thing you can try, but that is probably going to be a bit tedious since it would involve identifying all your compressed archives and then moving them out of your library. If PMS stops working also for a purely non-compressed RAR library, then this is most likely a PMS issue and not so much rar2fs.
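
As a rough way to identify compressed archives (a hedged sketch, assuming unrar 5.x, whose technical listing prints a "Compression:" line per entry where -m0 means store and -m1 through -m5 mean real compression; /src/path is the same placeholder used for the mount source elsewhere in this thread):

find /src/path -type f -name '*.rar' -print0 |
while IFS= read -r -d '' f; do
    if unrar lt -- "$f" 2>/dev/null | grep -Eq 'Compression:.* -m[1-5]'; then
        echo "compressed: $f"
    fi
done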

zeeter82 commented 2 years ago

Thanks @hasse69! I'll start another rescan soon and check it.

hasse69 commented 2 years ago

Don't thank me yet! If it is the stuck process issue you are observing then it has also been stuck in my TODO-list.

zeeter82 commented 2 years ago

Haha yep no worries. I just meant thanks in general with the software and support.

zeeter82 commented 2 years ago

So I sort of watched a scan this morning. I was scanning a fairly large movie path and the highest CPU I saw was around 78%, and that kept jumping around. I never saw it get stuck at 100%. But I did notice that it kind of halted before moving on to the next path...it seemed to slow down and use less CPU (or maybe 0 percent) for about 10-15 seconds and then started up again when switching to the next path. It was during this slowdown that I checked this:


root@plex-vm:/home/zeeter# less /var/log/syslog | grep 'Too many'
Dec 21 07:45:44 ftz-plex smbd[81920]:   Too many open files, unable to open more!  smbd's max open files = 150000
Dec 21 07:45:45 ftz-plex smbd[81920]:   Too many open files, unable to open more!  smbd's max open files = 150000

So something about the Plex scanner or rar2fs is hanging up somewhere causing it not to release files, but I'm not sure how to continue troubleshooting or next steps. I'll be happy to send logs if needed, but I won't share them publicly.

zeeter82 commented 2 years ago

I also noticed that there are 12 main PIDs, and each one of those appears to have 11 child PIDs....so about 12 total PIDs per mount. I'm going to assume this is normal.

hasse69 commented 2 years ago

I also noticed that there are 12 main PIDs, and each one of those appears to have 11 child PIDs....so about 12 total PIDs per mount. I'm going to assume this is normal.

Yes, this is normal when FUSE is running in MT mode.
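
For reference, one way to see this at a glance is to show the thread count (NLWP) per rar2fs process — a sketch assuming standard procps ps:

ps -o pid,nlwp,args -C rar2fs

One line per mount, with NLWP showing the main process plus its FUSE worker threads.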

hasse69 commented 2 years ago

So, this one is a bit tricky. But we can add some instrumentation in rar2fs to count the number of open vs close/release calls.

Btw, are you running PMS on Windows? I remember from adapting and implementing support for WinFSP that Windows is a bit more reluctant to close files than Linux. There is however a simple way to force those open file handles out of the cache and trigger the close operations. I need to check that.
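
In the meantime (this is not the instrumentation hasse69 mentions, just an external check assuming a standard /proc layout), the number of descriptors each rar2fs process itself is holding can be sampled once a second:

watch -n 1 'for p in $(pgrep -x rar2fs); do printf "%s: " "$p"; ls /proc/$p/fd | wc -l; done'

Note this counts what rar2fs holds open on the source file system; the smbd side would need the same check against the smbd PIDs.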

hasse69 commented 2 years ago

Exactly what version of rar2fs are you using, so that I can prepare a small patch?

zeeter82 commented 2 years ago

The previous git + the race patch....which I guess is current master now. Yes I'm running PMS in Windows.

zeeter82 commented 2 years ago

My cmd line options for all my mounts are like this:

rar2fs -owarmup -o allow_other --seek-length=1 /src/path /mnt/path

I can change these options around to something else if that might help troubleshoot as well.

zeeter82 commented 2 years ago

FYI - I enabled the PMS option last night to "run the scanner tasks at a lower priority", thinking this might do something differently (haha, just hoping really)...a new scan was performed this morning and still hit random "too many open files" errors, so this didn't make a difference.

hasse69 commented 2 years ago

Try this patch; after 10k opened files it will start logging the current status of open vs release calls to syslog every second.

test.patch.txt

zeeter82 commented 2 years ago

Rebuilt using patch....just started a new scan and within the first minute:

Dec 22 13:16:22 plex-vm smbd[17469]: [2021/12/22 13:16:22.215503,  0] ../source3/smbd/open.c:801(fd_open)
Dec 22 13:16:22 plex-vm smbd[17469]:   Too many open files, unable to open more!  smbd's max open files = 150000

zeeter82 commented 2 years ago

Latest syslog....some of it I didn't include but I did include where it started the scan and where it finished.

Dec 22 13:16:22 plex-vm smbd[17469]: [2021/12/22 13:16:22.215503,  0] ../source3/smbd/open.c:801(fd_open)
Dec 22 13:16:22 plex-vm smbd[17469]:   Too many open files, unable to open more!  smbd's max open files = 150000
Dec 22 13:17:01 plex-vm CRON[17847]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 22 13:17:41 plex-vm systemd[1]: Starting Cleanup of Temporary Directories...
Dec 22 13:17:41 plex-vm systemd-tmpfiles[18087]: [/usr/lib/tmpfiles.d/fail2ban-tmpfiles.conf:1] Line references path below legacy directory /var/run/, updatin$
Dec 22 13:17:41 plex-vm systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Dec 22 13:17:41 plex-vm systemd[1]: Started Cleanup of Temporary Directories.
Dec 22 13:18:12 plex-vm smbd[17469]: [2021/12/22 13:18:12.480172,  0] ../source3/smbd/open.c:801(fd_open)
Dec 22 13:18:12 plex-vm smbd[17469]:   Too many open files, unable to open more!  smbd's max open files = 150000
Dec 22 13:21:02 plex-vm rar2fs[2828]: open: 10013, release: 9364, release done: 9364
Dec 22 13:21:03 plex-vm rar2fs[2828]: open: 10077, release: 9384, release done: 9384
Dec 22 13:21:04 plex-vm rar2fs[2828]: open: 10149, release: 9384, release done: 9384
Dec 22 13:21:05 plex-vm rar2fs[2828]: open: 10225, release: 9384, release done: 9384
Dec 22 13:21:06 plex-vm rar2fs[2828]: open: 10288, release: 9434, release done: 9434
Dec 22 13:21:07 plex-vm rar2fs[2828]: open: 10346, release: 9709, release done: 9709
Dec 22 13:21:08 plex-vm rar2fs[2828]: open: 10394, release: 9709, release done: 9709
Dec 22 13:21:09 plex-vm rar2fs[2828]: open: 10440, release: 9709, release done: 9709
Dec 22 13:21:10 plex-vm rar2fs[2828]: open: 10506, release: 9709, release done: 9709
Dec 22 13:21:11 plex-vm rar2fs[2828]: open: 10551, release: 9709, release done: 9709
Dec 22 13:21:12 plex-vm rar2fs[2828]: open: 10594, release: 9958, release done: 9957
Dec 22 13:21:13 plex-vm rar2fs[2828]: open: 10654, release: 10033, release done: 10033
Dec 22 13:21:14 plex-vm rar2fs[2828]: open: 10717, release: 10033, release done: 10033
Dec 22 13:21:15 plex-vm rar2fs[2828]: open: 10777, release: 10033, release done: 10033
Dec 22 13:21:16 plex-vm rar2fs[2828]: open: 10821, release: 10033, release done: 10033
Dec 22 13:21:17 plex-vm rar2fs[2828]: open: 10843, release: 10305, release done: 10304
Dec 22 13:21:18 plex-vm rar2fs[2828]: open: 10867, release: 10354, release done: 10354
Dec 22 13:21:19 plex-vm rar2fs[2828]: open: 10903, release: 10354, release done: 10354
Dec 22 13:21:20 plex-vm rar2fs[2828]: open: 10971, release: 10354, release done: 10354
Dec 22 13:21:21 plex-vm rar2fs[2828]: open: 11041, release: 10354, release done: 10354
Dec 22 13:21:22 plex-vm rar2fs[2828]: open: 11116, release: 10361, release done: 10361
Dec 22 13:21:23 plex-vm rar2fs[2828]: open: 11164, release: 10646, release done: 10646
Dec 22 13:21:24 plex-vm rar2fs[2828]: open: 11230, release: 10658, release done: 10658
Dec 22 13:21:25 plex-vm rar2fs[2828]: open: 11308, release: 10658, release done: 10658
Dec 22 13:21:26 plex-vm rar2fs[2828]: open: 11388, release: 10658, release done: 10658
Dec 22 13:21:27 plex-vm rar2fs[2828]: open: 11446, release: 10658, release done: 10658
Dec 22 13:21:28 plex-vm rar2fs[2828]: open: 11505, release: 10867, release done: 10866
Dec 22 13:21:29 plex-vm rar2fs[2828]: open: 11566, release: 10867, release done: 10867
Dec 22 13:21:30 plex-vm rar2fs[2828]: open: 11634, release: 10867, release done: 10867
Dec 22 13:21:31 plex-vm rar2fs[2828]: open: 11696, release: 10867, release done: 10867
Dec 22 13:21:32 plex-vm rar2fs[2828]: open: 11772, release: 10867, release done: 10867
Dec 22 13:21:33 plex-vm rar2fs[2828]: open: 11836, release: 10964, release done: 10964
Dec 22 13:21:34 plex-vm rar2fs[2828]: open: 11890, release: 11217, release done: 11217
Dec 22 13:21:35 plex-vm rar2fs[2828]: open: 11960, release: 11217, release done: 11217
Dec 22 13:21:36 plex-vm rar2fs[2828]: open: 12015, release: 11217, release done: 11217
Dec 22 13:21:37 plex-vm rar2fs[2828]: open: 12084, release: 11217, release done: 11217
Dec 22 13:21:38 plex-vm rar2fs[2828]: open: 12164, release: 11217, release done: 11217
Dec 22 13:21:39 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:40 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:41 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:42 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:43 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:44 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:45 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:46 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:47 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:48 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:49 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:50 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:51 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:52 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:53 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:54 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:55 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:56 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:57 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
Dec 22 13:21:58 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
-------
Dec 22 13:23:49 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:50 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:51 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:52 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:53 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:54 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:55 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:55 plex-vm smbd[17469]: [2021/12/22 13:23:55.878758,  0] ../source3/smbd/open.c:801(fd_open)
Dec 22 13:23:55 plex-vm smbd[17469]:   Too many open files, unable to open more!  smbd's max open files = 150000
Dec 22 13:23:56 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:57 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:58 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:23:59 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:24:00 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:24:01 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:24:02 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:24:03 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:24:04 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:24:05 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
-------
Dec 22 13:44:46 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:47 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:48 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:49 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:50 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:51 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:52 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:53 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:54 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:55 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:56 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:57 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:58 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:44:59 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:45:00 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:45:01 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:45:02 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:45:03 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:45:04 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:45:05 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:45:06 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208
Dec 22 13:45:07 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208

zeeter82 commented 2 years ago

The parts of the log that I didn't include (marked with -------) were just a lot of rar2fs repeats....let me know if you need to see the entire thing.

hasse69 commented 2 years ago

So, not to jump to any conclusions here, but from what I can tell all open calls made are also released and completed. Then suddenly no more files are being opened, which is rather clear from the other error you get due to too many open files. Thus, from this there is nothing to indicate a problem with rar2fs.

zeeter82 commented 2 years ago

Yep, this is what I was afraid of. I don't really know how to go about trying to get Plex to fix this or troubleshoot on their end (and of course I would need to keep my logs private and not shared publicly if they request them). I've already opened up a forum post and asked on Discord - not really any luck there so far.

I was going to try upgrading my VM to Deb11 (which I was running before during the race issue and then rolled back as part of troubleshooting). I mean it's a shot in the dark I guess, but at least I'm trying something different.

hasse69 commented 2 years ago

Do you have the 'lsof' command on your VM? That can show you what files/resources are still open.
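
A couple of hedged examples of what that could look like here (the PID and path are placeholders; lsof +D can be slow on large trees):

lsof -p <smbd PID> | less       # everything one smbd process has open
lsof +D /mnt/path | wc -l       # open files under a rar2fs mount point

Comparing those counts against the rar2fs open/release numbers in syslog should show which side is actually holding the descriptors.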

hasse69 commented 2 years ago

One thing that is interesting is of course this:

Dec 22 13:21:58 plex-vm rar2fs[2828]: open: 12196, release: 11403, release done: 11403
-------
Dec 22 13:23:49 plex-vm rar2fs[2828]: open: 12208, release: 12208, release done: 12208

So, at this point there are ~800 open files; in about 2 minutes not that many more are opened, but that is also roughly the time it takes for rar2fs to catch up and release them. Having to process 800 files is a lot; possibly this could cause some timeout that is fooling smbd into thinking files are still open? The fact that you get this error from smbd directly and not from the kernel itself indicates it is some user-space monitoring of local resources that fails.

hasse69 commented 2 years ago

Also, did you check out this one? https://serverfault.com/questions/325608/samba-stuck-at-maximum-of-1024-open-files

hasse69 commented 2 years ago

Also, try without the -owarmup option. Does anything improve?

zeeter82 commented 2 years ago

Also, did you check out this one? https://serverfault.com/questions/325608/samba-stuck-at-maximum-of-1024-open-files

Yes I already tried this and have my samba limit set to 150000 now.
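
For anyone landing here from that serverfault thread, the samba limit in question is typically set in smb.conf (a sketch of the relevant stanza, not zeeter82's exact config):

# /etc/samba/smb.conf
[global]
    max open files = 150000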

zeeter82 commented 2 years ago

So upgrading to Deb 11 didn't fix anything, but.....

I removed the -owarmup option on all my mounts, launched Plex and started a new scan. I hope I don't jinx it, but it's been running over 15min now with no issues...

I'll post another update after it completes.

zeeter82 commented 2 years ago

It completed the scan successfully...no errors.

The -owarmup option appears to have been the culprit in my scenario. I had this option enabled because, as I understood it, it would help with caching and might increase scan performance. But I could be wrong. The next scheduled scan is 12 hours from now, so let's let that finish without errors before calling this 100% resolved, but I have faith now.

I guess the -owarmup option might still need to be investigated, but maybe this issue is isolated to my environment/config.

alfigast commented 2 years ago

I've had exactly the same problem before and just recently solved it with the new patch. I do not think Plex devs will help you out much as I have tried that before.

What solved it for me was a couple of things. I'm running latest master rar2fs on an ubuntu VM hosted on a win10 with 12 network shares. Using "rar2fs -o allow_other,uid=xyz,warmup --seek-length=1 /mnt/ /mntX/"

- Upgrading to the latest rar2fs master stopped crashes when this occurred
- Changed from mounting all shares in rar2fs to mounting a single common folder in rar2fs for all shares
- Changed the DefaultLimitNOFILE to 65535 in /etc/systemd/user.conf

Maybe you've tried this already, but I thought I'd comment anyway.
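
The last item is a systemd setting; roughly, it would look like this (alfigast's exact edits aren't shown, so treat this as an assumption):

# /etc/systemd/user.conf (system.conf has the same [Manager] option for system services)
[Manager]
DefaultLimitNOFILE=65535

# then re-execute the manager (or reboot) so new sessions pick it up:
sudo systemctl daemon-reexec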

hasse69 commented 2 years ago

In this use case/setup you run rar2fs in a VM, but all your archives are accessed across an SMB share? If you mount across a network rather than a local file system you need to be restrictive with the warmup option, especially if you run as many mounts as you seem to do. You can still use the warmup, but you probably need to reduce the number of threads it spawns by trying e.g. -owarmup=2 and then working your way up towards the point where your system cannot handle more. The default is 5 or 6 if I recall correctly.
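
Applied to the mount command used earlier in this thread, that would look like:

rar2fs -owarmup=2 -o allow_other --seek-length=1 /src/path /mnt/path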

zeeter82 commented 2 years ago

Thanks for the tips and all the help @hasse69. I'll change it to "=2" and see how that goes.

Hope you and your family have a great holiday!

Thanks again everyone in this thread for tips and suggestions.

zeeter82 commented 2 years ago

So I don't think I'll be able to use the -owarmup option with my current setup. I tried "=2" and still getting errors for too many open files.

hasse69 commented 2 years ago

Ok, thanks for the feedback! A bit strange though; the warmup should not really cause any file descriptor leaks, since all it does is scan each archive it finds one by one. I cannot see any descriptor leaks in my test runs.

zeeter82 commented 2 years ago

I've had exactly the same problem before and just recently solved it with the new patch. I do not think Plex devs will help you out much as I have tried that before.

What solved it for me was a couple of things. I'm running latest master rar2fs on an ubuntu VM hosted on a win10 with 12 network shares. Using "rar2fs -o allow_other,uid=xyz,warmup --seek-length=1 /mnt/ /mntX/"

- Upgrading to the latest rar2fs master stopped crashes when this occurred
- Changed from mounting all shares in rar2fs to mounting a single common folder in rar2fs for all shares
- Changed the DefaultLimitNOFILE to 65535 in /etc/systemd/user.conf

Maybe you've tried this already, but I thought I'd comment anyway.

@alfigast I'm trying to duplicate your config by having one main rar2fs mount, but I'm not sure if I can do that with my setup. I tried making a folder and then creating a symbolic link to one of my shares...then I tried mounting this folder under rar2fs. I added it to Samba and I can see it, but I can't access the folder from Windows (access denied). I tried messing with allow_root and allow_other and setting my uid/gid, but still no luck.

How did you go about getting your shares in one folder to use with rar2fs?

alfigast commented 2 years ago

My setup may not be the most efficient one but it works for me. I have Win10>VM(Ubuntu+rar2fs)>Win10(Plex).

Not sure why it would be any different from before by just changing to a common mount. For me at least it's simply mounting everything in fstab to a specific folder, which I then mount in rar2fs.

fstab:
//ip/a/Movies1 /mnt/Movies1 cifs credentials=/target/.smbcredentials,defaults 0 0
//ip/b/Movies2 /mnt/Movies2 cifs credentials=/target/.smbcredentials,defaults 0 0
..

rar2fs:
/rar2fs -o allow_other,uid=WinCredentials,warmup --seek-length=1 /mnt/ /target/

zeeter82 commented 2 years ago

I'll play around with my settings to see if I can improve my setup.

zeeter82 commented 2 years ago

So I was able to consolidate my setup, but still getting the same issue when using -owarmup.

I only have 1 main mount folder now that contains all my other mounts...so only need one rar2fs instance. I pretty much instantly got the too many open files error when starting a Plex scan.

@hasse69 Anything else I can try that would let me still use the -owarmup option? I don't understand why it's still doing it if I only have one rar2fs instance now.

zeeter82 commented 2 years ago

Wow, so with this new config with only 1 rar2fs instance....I still get the too many open files error even when not using the -owarmup option.

So weird....

hasse69 commented 2 years ago

Something is really fishy here. I am currently clueless as to what is happening on your setup. I have tried a few rather huge mounts locally and nothing causes more than 10-15 extra files to be opened while warmup is in action. Why would accessing files over SMB be any different? I am having a hard time seeing this being a rar2fs issue, to be honest, but that makes it no less "interesting". I would really like to understand what is going on here...

hasse69 commented 2 years ago

I know it is Christmas Eve and all...but, when you get the time, can you please capture another syslog now that you have only one mount?

That this problem triggers for one mount even without warmup indicates it has nothing to do with it really.

Also what you can do/test with multiple mounts is to stop PMS, then mount everything and wait for warmup to complete (syslog tells you when it is) then start PMS. What happens?

I still for some reason believe there is something here with how Windows tends to keep files open in combination with the new PMS scanner.

EDIT: All the tests I have made locally have had the actual archives mounted across an SMB share between my NAS and the Linux server running rar2fs. I observe no issues whatsoever. There is something here that we are not seeing...
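
On telling when warmup has finished: the exact completion message isn't quoted in this thread, but since rar2fs logs to syslog under its own tag (as in the traces above), filtering on that tag is enough to watch for it:

grep 'rar2fs\[' /var/log/syslog | tail -n 20
journalctl -t rar2fs -n 20        # on systems using the journal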

zeeter82 commented 2 years ago

I'm back to my setup with multiple mounts, and I usually wait for the warmup to complete anyway before starting up Plex. I set up my rar2fs mount script in systemd so it starts automatically with my VM and I can easily see its status. I also have some batch scripts on the Windows side that tell me whether the mounts are accessible and valid, and that check timestamps on the Plex logs to verify roughly when the last scan occurred.
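
A minimal sketch of what such a systemd unit can look like (paths, options and the use of FUSE's foreground flag -f are assumptions, not zeeter82's actual script):

# /etc/systemd/system/rar2fs-movies.service
[Unit]
Description=rar2fs mount for Movies
After=network-online.target remote-fs.target

[Service]
Type=simple
ExecStart=/usr/local/bin/rar2fs -f -o allow_other --seek-length=1 /mnt/Movies1 /srv/rar2fs/Movies1
ExecStop=/bin/fusermount -u /srv/rar2fs/Movies1
Restart=on-failure

[Install]
WantedBy=multi-user.target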

I'm totally with you and this is really frustrating. Maybe a month or two ago I had no issue using the warmup option with multiple mounts. I'm honestly thinking about recreating my VM from scratch. I keep my VM fairly clean though and take snapshots before making any crazy config changes so I can roll back easily if necessary.

zeeter82 commented 2 years ago

I just completely rebuilt a new Ubuntu 20.04 LTS Server VM....set it up for one rar2fs instance and still getting too many open files issue......So I'm guessing now this is either a Windows and/or PMS issue. No clue where to go from here. I'm not really in a position to rebuild this box yet.

hasse69 commented 2 years ago

So if you revert to some older version of PMS, is this issue no longer observed? Not saying this is a PMS issue, only that something is different and whatever it is seems relevant.

zeeter82 commented 2 years ago

Sorry for the long delay in replying. I hope everyone had a good Xmas and has a great 2022!

So I've been doing all kinds of troubleshooting. I was afraid to go back too far with PMS versions, because I believe recent versions made changes to the DB and I didn't want to negatively affect it by going back to a version that could no longer read the DB correctly (schema changes?).

I also did an in-place upgrade to Win11 and this didn't really help either. I then changed around my mount folders, did a little bit of renaming/cleanup, then added these new folders to Plex and let it rescan everything...which took a couple of days. I then removed the old folders from Plex after verifying mostly everything, and optimized/cleaned the DB. Everything appeared to be working better after doing this. And then I ran into another issue where stuff would slow down after a while. After I started up my VM and got the rar2fs mounts loaded, everything would be nice and fast....a couple hours later and/or after the next media scan....it would crawl. Scans would still be running for hours, or it would buffer while playing back media locally. I sort of figured this might be related to Win11, since this was a new symptom that only showed up after going to Win11.

So I decided to do the system rollback to Win10...this failed, to the point where explorer.exe wouldn't load and I couldn't get a desktop. I had to manually run apps/processes from Task Manager. So I got everything backed up and completely rebuilt the OS with a fresh Win10 21H2.

So far everything is running much better overall. I don't see any more warnings/errors that look like they are directly related to SMB/rar2fs. I have noticed some new kernel warnings in syslog, though, that I'm hoping are just warnings and not something I need to worry about in the long run:

Dec 30 16:48:51 plex-vm kernel: [27586.835364] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:57c0c
Dec 30 16:48:51 plex-vm kernel: [27587.220258] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:57f7e
Dec 30 16:48:51 plex-vm kernel: [27587.313025] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:5805b
Dec 30 16:48:52 plex-vm kernel: [27587.755115] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:58474
Dec 30 16:48:52 plex-vm kernel: [27587.829609] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:58509
Dec 30 16:48:52 plex-vm kernel: [27587.895068] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:585a1
Dec 30 16:48:52 plex-vm kernel: [27587.997024] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:586a6
Dec 30 16:48:52 plex-vm kernel: [27588.276708] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:58939
Dec 30 16:48:53 plex-vm kernel: [27588.557879] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:58bcc
Dec 30 16:49:04 plex-vm kernel: [27600.411703] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:5f4e6
Dec 30 16:49:05 plex-vm kernel: [27600.594100] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:5f62c
Dec 30 16:49:06 plex-vm kernel: [27601.942842] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:5feef
Dec 30 16:49:06 plex-vm kernel: [27602.016115] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:5ff85
Dec 30 16:49:06 plex-vm kernel: [27602.087350] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:60018
Dec 30 16:49:06 plex-vm kernel: [27602.159639] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:600ad
Dec 30 16:49:06 plex-vm kernel: [27602.229728] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:60146
Dec 30 16:49:06 plex-vm kernel: [27602.234945] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:60148
Dec 30 16:49:06 plex-vm kernel: [27602.301333] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:601db
Dec 30 16:49:06 plex-vm kernel: [27602.309104] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:601dd
Dec 30 16:49:09 plex-vm kernel: [27605.435941] smb2_cancelled_close_fid: 36 callbacks suppressed
Dec 30 16:49:09 plex-vm kernel: [27605.435964] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:61adb
Dec 30 16:49:10 plex-vm kernel: [27605.510165] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:61b6e
Dec 30 16:49:10 plex-vm kernel: [27605.583391] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:61c01
Dec 30 16:49:10 plex-vm kernel: [27605.656544] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:61c94
Dec 30 16:49:10 plex-vm kernel: [27605.729004] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:61d27
Dec 30 16:49:10 plex-vm kernel: [27605.815509] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:61dba
Dec 30 16:49:10 plex-vm kernel: [27605.888275] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:61e4d
Dec 30 16:49:10 plex-vm kernel: [27605.960932] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:61ee0
Dec 30 16:49:24 plex-vm kernel: [27619.742186] CIFS: VFS: \\mnt-pth\pth1 Close unmatched open for MID:68670
Dec 30 16:51:12 plex-vm kernel: [27727.500281] CIFS: VFS: \\mnt-pth\pth2 Close unmatched open for MID:b6df8
Dec 30 16:51:12 plex-vm kernel: [27728.215515] CIFS: VFS: \\mnt-pth\pth2 Close unmatched open for MID:b74ec
Dec 30 16:51:15 plex-vm kernel: [27731.306372] CIFS: VFS: \\mnt-pth\pth2 Close unmatched open for MID:b8b75
Dec 30 16:51:16 plex-vm kernel: [27731.501115] CIFS: VFS: \\mnt-pth\pth2 Close unmatched open for MID:b8d36
Dec 30 16:51:16 plex-vm kernel: [27731.565504] CIFS: VFS: \\mnt-pth\pth2 Close unmatched open for MID:b8dc9
Dec 30 16:51:16 plex-vm kernel: [27731.768661] CIFS: VFS: \\mnt-pth\pth2 Close unmatched open for MID:b8f7e

I see quite a few of those; usually it happens while Plex is rescanning. So I'm guessing this is coming from the way I have my mounts set up in /etc/fstab.

meowkat commented 2 years ago

Hi All,

I have no insight into solving this issue, but I was able to remedy it by restoring my Plex VM from a backup taken a few months ago. I first attempted to reinstall an older version of Plex on top of the current version, but due to the recent database changes all libraries became corrupt and unusable. After restoring the entire Plex VM back to Plex version 1.24.2.4973, things worked as normal.

My currently working setup:

rar2fs VM:
rar2fs v1.22.0 (DLL version 7)
FUSE library version: 2.9.2
fusermount version: 2.9.2
using FUSE kernel interface version 7.19
OS: Ubuntu 14.04.6 LTS (trusty)
network share: NFSv4

plex VM (post rollback):
Plex version: 1.24.2.4973
OS: Ubuntu 18.04.4 LTS (bionic)
network share: NFSv4

hasse69 commented 2 years ago

That is a pretty old version of rar2fs. Difficult to conclude anything unless you use some more recent version.

zeeter82 commented 2 years ago

Quick update: So after rebuilding OS on my box (Win10 21H2), I'm back to running my original Debian 11 VM and it's been fairly stable so far with no major issues for about 2 weeks now.