Currently, the active db file (`series`) is not compressed; only the backups are. Compressing the active file as well has several benefits.
Reading the compressed file is very fast. With zstd, the overhead is negligible. (If it's on a "slow" medium, it's probably even faster than reading uncompressed, but I doubt that's the case nowadays.)
When writing the series file, we already need to compress one file (the .1 backup).
With "sparse" backups (for #42), we don't need to compress anything.
Rolling back a backup is essentially a no-op.
Ideally, we should figure out how to read and decompress at the same time, i.e. the equivalent of `zstd -d -c file.zst | jless`. Or, at the very least, test if that is faster. That being said, orjson doesn't support reading from a file descriptor, so maybe this would be pointless.
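For reference, a minimal sketch of the two read strategies, using stdlib `gzip` as a stand-in for zstd (the real file would go through the `zstandard` package or the zstd CLI):

```python
import gzip
import io
import json

db = {"series": [1, 2, 3]}
blob = gzip.compress(json.dumps(db).encode())

# Whole-payload approach: decompress everything, then parse. This is
# what orjson forces on us, since orjson.loads only accepts bytes/str,
# not a file object.
whole = json.loads(gzip.decompress(blob))

# Piped approach, the equivalent of `zstd -d -c file.zst | jless`:
# hand the decompressing reader straight to the parser. Note that
# json.load still calls reader.read() internally, so truly incremental
# parsing would need a streaming parser (e.g. ijson) instead.
with gzip.GzipFile(fileobj=io.BytesIO(blob)) as reader:
    streamed = json.load(reader)
```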
Preferably, this should be done before #42.