tentacles-from-outer-space opened this issue 1 year ago (status: Open)
Could it be the case from https://github.com/fstpackage/fst/issues/218?
Hi @tentacles-from-outer-space, thanks for your question!
Modern storage devices have increasingly large RAM caches that can service repeated identical requests. That is why a correct benchmark should read a unique file from disk on every iteration, to avoid hitting the SSD cache instead of the actual stored data.
Files can also be cached by the operating system, and some storage devices have a startup latency when waking from a low-power mode, which adds to the first read as well.
For your benchmark, you might consider first writing a large number (e.g. 100) of unique fst files to benchmark writing, and then reading those unique files to get a correct benchmark of reading performance. With such a setup it's safe to assume that there are no caching effects :-)
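A minimal sketch of that setup in R (the file count and data size are arbitrary choices for illustration):

```r
library(fst)

n_files <- 100  # number of unique files; arbitrary choice
df <- data.frame(x = runif(1e5), y = sample(letters, 1e5, replace = TRUE))

# Write benchmark: each iteration writes a distinct file
paths <- file.path(tempdir(), sprintf("bench_%03d.fst", seq_len(n_files)))
write_times <- sapply(paths, function(p) system.time(write_fst(df, p))["elapsed"])

# Read benchmark: each iteration reads a file that was never read before,
# so repeated-request caching cannot serve it
read_times <- sapply(paths, function(p) system.time(read_fst(p))["elapsed"])

summary(write_times)
summary(read_times)
```

One caveat: on a machine with plenty of RAM, freshly written files may still sit in the OS page cache, so for a strict cold-read benchmark the total data written should exceed available memory (or the caches should be dropped between phases).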
I notice that in a fresh session the first use of `read_fst` is slower than subsequent uses: same file, same number of cores, just running the same command again. RStudio on a server (80 cores), ~4 GB file.
The first use is about twice as slow.
Created on 2023-05-24 with reprex v2.0.2
Is it some kind of caching?
I ran into this when trying to compare different settings for the number of threads.
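For comparing thread counts specifically, one common workaround is a warm-up read before timing, so the one-time first-use cost doesn't skew whichever setting happens to run first. A sketch (using `threads_fst()` from the fst package; the data size here is small just to keep the example self-contained):

```r
library(fst)

# Create a test file (in practice, point this at your own large file)
path <- tempfile(fileext = ".fst")
write_fst(data.frame(x = runif(1e5)), path)

invisible(read_fst(path))  # warm-up read absorbs the one-time first-use cost

timings <- sapply(c(1, 2, 4), function(n) {
  threads_fst(n)  # set the number of threads fst may use
  system.time(read_fst(path))["elapsed"]
})
print(timings)
```

Note that after the warm-up the file is likely served from cache, so this mainly compares decompression/CPU scaling across thread counts; to include disk I/O in the comparison, each timed read should use a unique, previously unread file as described earlier in the thread.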