Laharah opened this issue 7 months ago
I have the same problem, but in Docker on Linux
I have the same problem in docker too
```
Download https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz
error: Uncaught Error: Too many open files (os error 24)
    at new FsWatcher (ext:runtime/40_fs_events.js:23:17)
    at Object.watchFs (ext:runtime/40_fs_events.js:76:10)
    at ext:deno_node/_fs/_fs_watch.ts:58:21
    at Object.action (ext:deno_web/02_timers.js:154:11)
    at handleTimerMacrotask (ext:deno_web/02_timers.js:68:10)
    at eventLoopTick (ext:core/01_core.js:160:21)
```
Managed to implement a workaround for this. While we wait for the PR to be accepted, you can clone the branch I made and then add this option to the end of your `config.json`, in your storage peer:
```jsonc
{
  "type": "storage",
  // ...
  "usePolling": true
}
```
If you try it out and run into an issue with it LMK.
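For context on what the flag changes, here is a rough illustrative sketch, not the actual branch code, of how a `usePolling` option can be forwarded to chokidar, assuming the storage peer watches its `baseDir` that way; the function name and paths below are made up for illustration.

```ts
// Illustrative sketch only -- not the code from the PR branch.
// With usePolling, chokidar stats files on an interval instead of registering
// one native fs watcher per path, so it stops consuming inotify watchers/handles.
import chokidar from "npm:chokidar";

function watchStorage(baseDir: string, usePolling: boolean) {
  return chokidar.watch(baseDir, {
    usePolling,          // true => stat-polling, no native watchers
    interval: 1000,      // polling interval in ms (arbitrary choice here)
    ignoreInitial: true, // don't emit events for files already on disk
  });
}

// Mirrors the "usePolling": true entry from the storage peer config above.
const watcher = watchStorage("./vault", true);
watcher.on("all", (event, path) => console.log(event, path));
```

The trade-off is that polling re-stats the tree on a timer, so it costs more CPU on large vaults, but it sidesteps the per-file watcher handles entirely.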
Livesync-bridge crashes when there are 89-90 or more files.
This is the error I'm getting:
I wrote a script to delete files until the program stopped crashing, and about 90 files seems to be the number at which it crashes. If there are fewer than 90 files in the storage `baseDir`, it will download files until it reaches 90 and then crash.

My current setup is a Linux machine, trying to replicate to a folder on the same server. I'm not running in a Docker instance, just directly from my terminal.
I did the obvious things first. I've set my file handle limit all the way up:
And just to double check:
Here's the current config I'm using:
Here also is the relevant portion of the stack trace where the error originates.
My current best guess is that `chokidar` has some kind of error. Reading into some issues, it looks like `chokidar` is supposed to combine inotify calls to reduce open handles, but that doesn't seem to be working for some reason. My TS isn't good enough for me to implement a workaround, and skimming the code, nothing jumps out at me as a cause.

I suspect there's something wrong with my server's configuration, since I'd imagine I'd have seen an issue from someone else by now.
Please let me know if there's anything you can think of. I'm planning to try running livesync-bridge in a Docker container, just to check, but I don't think it'll help since the underlying volume will be served from my filesystem. If the error originates below the container, it shouldn't matter whether it's containerized or not. Still, I'll give it a try in the morning just in case.
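One detail worth noting from the stack trace: the error comes from `new FsWatcher` inside `watchFs`, and on Linux `inotify_init` also reports "Too many open files" (EMFILE) when the per-user limit on inotify *instances* (sysctl `fs.inotify.max_user_instances`, commonly 128) is reached, independent of `ulimit -n`. Since that is a kernel limit, it would behave the same inside Docker. The sketch below is a hypothetical probe, not from the issue, to see roughly how many watchers the environment allows before it throws, assuming each `Deno.watchFs()` call maps to its own inotify instance.

```ts
// Hypothetical probe (run with `deno run --allow-read probe.ts`), not from the
// issue: keep creating watchers until the runtime throws, to see which limit
// is actually being hit. Each Deno.watchFs() call constructs its own FsWatcher,
// which on Linux presumably maps to its own inotify instance, so the per-user
// fs.inotify.max_user_instances limit (commonly 128) can run out long before
// the open-file limit from `ulimit -n` does.
const watchers: Deno.FsWatcher[] = [];
try {
  for (let i = 0; i < 1024; i++) {
    watchers.push(Deno.watchFs(".")); // one watcher per call
  }
  console.log("created", watchers.length, "watchers without hitting a limit");
} catch (err) {
  console.error(`failed after ${watchers.length} watchers:`, err);
} finally {
  for (const w of watchers) w.close();
}
```

If the probe fails near 128 rather than near the `ulimit -n` value, raising `fs.inotify.max_user_instances` (or using the polling workaround above) would be the more relevant knob.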