w32zhong opened this issue 6 years ago
Certainly possible. This is how droppy operated a long time ago. Caching was added because […]. Could add an option like `noCache` to disable directory caching, but it won't be an easy change given how deeply integrated that cache (`filetree.js`) is right now.
I see, but if I really want to host a reasonably large directory hierarchy, the directory caching is very expensive, and oftentimes I have to refresh to make newly created files show up.
Guess I will experiment with it. The cache indeed adds a lot of issues itself. Directory sizes will have to go; maybe I will add a very short cache (like 1 minute) for faster back-and-forth navigation. And search is probably also going to need some sort of cache.
> oftentimes I have to refresh to make newly created files show up

What kind of file system is that, and are you using `pollingInterval`?
That is a great idea. Maybe another approach would be to refresh the cache by re-exploring the most likely visited directories (e.g. recently visited ones and our guess at what will be visited next) while the service is idle (when we are not handling requests).

I am not sure what the `pollingInterval` you are mentioning is. What I am doing is disabling the watch feature so that there is a refresh button (https://github.com/silverwind/droppy/issues/327) to host my files.

My filesystem is ext4. I really wish droppy could provide a way to disable the cache; it is the only thing that makes me consider alternatives that are simpler but faster.
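For reference, `watch`, `pollingInterval`, and `updateInterval` are the settings that come up again later in this thread; `pollingInterval` is presumably the interval, in milliseconds, at which droppy polls the filesystem when native watching isn't available. A rough sketch of the relevant config.json entries, with illustrative values rather than documented defaults:

```
{
  "watch": true,
  "pollingInterval": 5000,
  "updateInterval": 1000
}
```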
There's a new performance improvement coming in node 10.10 which will speed up initial directory caching by 2-3 times, at the cost of no longer having timestamps available. I'm thinking of switching to that method and then retrieving timestamps asynchronously, which would be the best of both worlds. The downside is of course that I'd have to require node 10.10 or higher unless they backport that feature.
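The node 10.10 feature being referred to is presumably `fs.readdir`'s new `withFileTypes` option, which returns `fs.Dirent` objects and so avoids a `stat()` call per entry (which is also why timestamps are no longer available). A minimal sketch of the "list fast, fetch timestamps asynchronously" idea, not droppy's actual walker:

```js
// List a directory quickly via Dirent objects (node >= 10.10), then fill in
// timestamps in the background without blocking the listing itself.
const fs = require("fs").promises;
const path = require("path");

async function listDir(dir) {
  // withFileTypes avoids one stat() per entry just to learn its type
  const dirents = await fs.readdir(dir, { withFileTypes: true });
  const entries = dirents.map(d => ({
    name: d.name,
    dir: d.isDirectory(),
    mtime: null, // filled in asynchronously below
  }));

  // fire-and-forget: timestamps arrive later and can be pushed to clients
  Promise.all(entries.map(async e => {
    try {
      e.mtime = (await fs.stat(path.join(dir, e.name))).mtimeMs;
    } catch {
      // entry may have disappeared in the meantime; leave mtime as null
    }
  }));

  return entries;
}
```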
Sounds good to me. I know it would be a mess to maintain two versions of the code (for compatibility), but if there is an easy path to reducing the cost of directory caching, it could be appealing to branch the code, implement it, and see how much speedup we can achieve. Again, thank you for still looking at this issue.
I don't particularly care about old node versions, so I'd just release a new major version and require node >= 10.10 😉
That keeps the code simple. Looking forward to seeing how this change affects loading speed.
I think I'll probably just factor out walker.js into its own module, which could then maybe have a fallback.
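A standalone walker module with a fallback could, for instance, just feature-detect `fs.Dirent`. A rough sketch under that assumption (not the actual walker.js):

```js
// Use withFileTypes where the runtime supports it (node >= 10.10),
// otherwise emulate it with one stat() per entry (slower, but compatible).
const fs = require("fs").promises;
const path = require("path");

const hasDirent = typeof require("fs").Dirent === "function";

async function readEntries(dir) {
  if (hasDirent) {
    return fs.readdir(dir, { withFileTypes: true });
  }
  const names = await fs.readdir(dir);
  return Promise.all(names.map(async name => {
    const stats = await fs.stat(path.join(dir, name));
    return { name, isDirectory: () => stats.isDirectory() };
  }));
}
```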
Did this get abandoned? I'm looking to use droppy as a web UI for an rclone FUSE mount, but since it's a network drive behind the scenes, caching the entire directory tree is unwelcome. Is this something that could be disabled, even if navigation latency suffers?
Any news?
Wait, is it now possible to disable the caching completely? I am mounting multiple NFS shares with massive numbers of files into the container, and caching never completes. I already set `watch: false`, `pollingInterval: 0`, and `updateInterval: 0`, but the container stays stuck on `[INFO] Caching files ...` forever!
No, you have to let it do the first caching
it always crashes:
droppy-prepress.1.v5z3bsermux8@swarm1 | 2020-03-27 01:58:07 [INFO] Caching files ...
droppy-prepress.1.v5z3bsermux8@swarm1 |
droppy-prepress.1.v5z3bsermux8@swarm1 | <--- Last few GCs --->
droppy-prepress.1.v5z3bsermux8@swarm1 |
droppy-prepress.1.v5z3bsermux8@swarm1 | [1:0x564695842e00] 1087936 ms: Scavenge 2045.7 (2049.2) -> 2044.9 (2049.7) MB, 4.0 / 0.0 ms (average mu = 0.213, current mu = 0.194) allocation failure
droppy-prepress.1.v5z3bsermux8@swarm1 | [1:0x564695842e00] 1089569 ms: Mark-sweep 2045.8 (2049.7) -> 2045.0 (2049.2) MB, 1591.4 / 0.0 ms (average mu = 0.130, current mu = 0.071) allocation failure scavenge might not succeed
droppy-prepress.1.v5z3bsermux8@swarm1 | [1:0x564695842e00] 1089621 ms: Scavenge 2045.8 (2049.2) -> 2045.3 (2050.2) MB, 5.3 / 0.0 ms (average mu = 0.130, current mu = 0.071) allocation failure
droppy-prepress.1.v5z3bsermux8@swarm1 |
droppy-prepress.1.v5z3bsermux8@swarm1 |
droppy-prepress.1.v5z3bsermux8@swarm1 | <--- JS stacktrace --->
droppy-prepress.1.v5z3bsermux8@swarm1 |
droppy-prepress.1.v5z3bsermux8@swarm1 | ==== JS stack trace =========================================
droppy-prepress.1.v5z3bsermux8@swarm1 | FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
droppy-prepress.1.v5z3bsermux8@swarm1 |
droppy-prepress.1.v5z3bsermux8@swarm1 | Security context: 0x35b77c61a2f1 <JSObject>
droppy-prepress.1.v5z3bsermux8@swarm1 | 0: builtin exit frame: concat(this=0x1db8ea22d869 <JSArray[708]>,0x1db8ea22db59 <JSArray[4]>,0x1db8ea22d869 <JSArray[708]>)
droppy-prepress.1.v5z3bsermux8@swarm1 |
droppy-prepress.1.v5z3bsermux8@swarm1 | 1: sync [0x28224c2ba5f9] [/droppy/node_modules/rrdir/index.js:~95] [pc=0x30ca9902f28c](this=0x28224c2ba5c1 <JSFunction module.exports (sfi = 0x28224c2b0259)>,0x3c0ddc9cf609 <String[39]: /files/some/folder/to/a/file>,0x3c0ddc9cf5d1 <Object map = 0x394f488...
Memory on this VM is 32 GB, which is not that low, and I don't feel it's anywhere near the limit. Is there some way to set the JavaScript heap memory max?
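The ~2045 MB figures in the GC lines match V8's default old-space limit (roughly 2 GB on 64-bit builds of that era), not the VM's 32 GB, so the process is hitting Node's own heap cap. One way to raise it without touching the image's entrypoint, assuming a Node version that accepts V8 flags in `NODE_OPTIONS`, is something like this (value in MB, service name taken from the log above):

```
docker service update --env-add NODE_OPTIONS="--max-old-space-size=8192" droppy-prepress
```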
I have a huge directory tree, and every time I run the droppy daemon it takes a long time (minutes). I think it would make a lot of sense if droppy did not cache the entire directory hierarchy, and instead just discovered the directories on the fly.
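A purely illustrative sketch of what "discover the directories on the fly" means: each listing request reads exactly one directory with `fs.readdir`, and no global cache is built. This is not droppy's actual request handling, and the root path and port are made up:

```js
const fs = require("fs").promises;
const path = require("path");
const http = require("http");

const ROOT = "/files"; // hypothetical files root

http.createServer(async (req, res) => {
  try {
    // strip the query string and confine the requested path under ROOT
    const requested = path.posix.normalize(decodeURIComponent(req.url.split("?")[0]));
    const dirents = await fs.readdir(path.join(ROOT, requested), { withFileTypes: true });
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify(dirents.map(d => ({ name: d.name, dir: d.isDirectory() }))));
  } catch {
    res.statusCode = 404;
    res.end("not found");
  }
}).listen(8080); // arbitrary port for this sketch
```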