damb opened this issue 4 years ago
Stats:
Tests were performed against the Docker setup from https://github.com/EIDA/mediatorws/tree/feature/in-memory-station-text/docker/prod running at localhost
(i.e. granular endpoint requests; HTTP caching enabled); logLevel=WARNING
N=100
Columns: percentile | response start time | response total time (times in seconds)
Query: /fdsnws/station/1/query?net=CH&format=text
File based:
50%: 0.9462825  | 2.1556645
85%: 0.9964951  | 2.26633
90%: 1.0228163  | 2.3158552
95%: 1.1145799  | 2.39628655
99%: 2.04467591 | 3.22645922
In-memory:
50%: 0.8970515  | 2.0515955
85%: 0.92775625 | 2.11524265
90%: 1.5534705  | 2.70055
95%: 1.6321251  | 2.8170224
99%: 2.27393176 | 3.44195183
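The percentile summaries above could be produced with something like the following sketch. It assumes the per-request timings (response start time and response total time, in seconds) have already been collected; the function and sample data are illustrative, not the actual benchmark script.

```python
import numpy as np

def summarize(timings, percentiles=(50, 85, 90, 95, 99)):
    """Return {percentile: [start_time, total_time]} over N samples.

    `timings` is an (N, 2) array-like of per-request
    (response start time, response total time) pairs in seconds.
    """
    arr = np.asarray(timings, dtype=float)
    return {p: np.percentile(arr, p, axis=0) for p in percentiles}

# Hypothetical sample: 5 requests, (start, total) pairs in seconds.
sample = [(0.90, 2.05), (0.95, 2.16), (1.02, 2.32), (1.11, 2.40), (2.04, 3.23)]
stats = summarize(sample)
for p, (start, total) in sorted(stats.items()):
    print(f"{p}%: {start:.4f} | {total:.4f}")
```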
The difference is marginal; most probably some kind of OS-level caching is still involved.
This is OS level disk caching.
@kaestli, do you know the actual limits of OS-level disk caching? How does it scale with more files (most probably this is hardware-, OS-, and filesystem-dependent, right)?
I know that Linux typically uses all free physical RAM for caching (Windows, at least the non-server versions, does not). It is widely configurable (e.g. https://unix.stackexchange.com/questions/30286/can-i-configure-my-linux-system-for-more-aggressive-file-system-caching); the configuration differs for physical devices and NFS. Actual disk operation is filesystem-dependent, so I guess the effects of some configuration parameters are too. At least for the current application, I think OS-level optimization of the caching strategy may be overkill.
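For reference, the page-cache tunables discussed in the linked thread can be inspected under /proc/sys/vm on Linux; a minimal sketch (Linux-only, read-only; the selection of tunables is illustrative). To actually rule out page-cache effects in the benchmark, one would additionally drop caches between runs, e.g. `echo 3 > /proc/sys/vm/drop_caches` as root.

```python
from pathlib import Path

# Kernel VM tunables that influence how aggressively Linux keeps
# file data cached in RAM (writing them requires root; here we only read).
TUNABLES = [
    "vfs_cache_pressure",      # eagerness to reclaim dentry/inode caches
    "dirty_ratio",             # % of RAM dirty pages may reach before sync writes
    "dirty_background_ratio",  # % at which background writeback starts
]

def read_vm_tunables():
    """Return {name: value} for the tunables present on this system."""
    values = {}
    for name in TUNABLES:
        path = Path("/proc/sys/vm") / name
        if path.exists():
            values[name] = int(path.read_text().strip())
    return values

print(read_vm_tunables())
```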
Features and Changes:
fdsnws-station-text
instead of writing download task results to temporary files, keep them in memory
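The change could be sketched as below: the file-based variant routes each task result through a temporary file (and hence the OS page cache), while the in-memory variant keeps the payload in a process-local buffer. The names are illustrative, not the actual mediatorws API.

```python
import io
import tempfile

def store_on_disk(payload: bytes):
    """File-based variant: result lands on disk; subsequent reads go
    through the filesystem and benefit from OS-level disk caching."""
    tmp = tempfile.TemporaryFile()
    tmp.write(payload)
    tmp.seek(0)
    return tmp

def store_in_memory(payload: bytes):
    """In-memory variant: result stays in process memory, so response
    times no longer depend on page-cache state."""
    return io.BytesIO(payload)

# Hypothetical station-text payload.
result = b"Network|Station|Latitude|Longitude\nCH|DAVOX|46.77|9.88\n"
assert store_on_disk(result).read() == store_in_memory(result).read()
```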