cberner / redb

An embedded key-value database in pure Rust
https://www.redb.org
Apache License 2.0
3.19k stars 145 forks

What is the best way to bulk load redb? #822

Closed: marvin-j97 closed this issue 2 months ago

marvin-j97 commented 2 months ago

I am trying to benchmark large data sets in https://github.com/marvin-j97/rust-storage-bench, so I want to load a lot of data very quickly, no matter the durability.

If I use Durability::None, disk usage balloons, which has already caused me to run out of disk space more than once, so I started issuing an Immediate flush every so often:

// NOTE: Keys are written in monotonically increasing order.
for x in 0..item_count {
    db.insert(
        key,
        value,
        // Third argument: flush durably on this insert. Doing so every
        // 100k items keeps disk usage from building up too much.
        args.backend == Backend::Redb && x % 100_000 == 0,
    );
}

With the above code, writing 100M items (16-byte keys, 64-byte values) takes 130 minutes, which is about 78µs per insert. That is slower than the fsync latency of my SSD (a PM9A3), so writing with Durability::None appears to give no advantage here. Is there any point in using None at all?

Additionally, for this comparatively small data set (just ~8 GB of user data), redb has written 4.4 TB (write amplification = 540), with the resulting .redb file being ~28 GB.

As a comparison: (attached comparison figures not captured)

What is the best way to write a lot of KVs without bloating disk space, while keeping inserts somewhat fast?

cberner commented 2 months ago

It's best to insert them all in a single transaction, and then the durability won't matter. Here's an example: https://github.com/cberner/redb/blob/7377b3b37edcfa8438fb617543dc8df22a122fff/benches/lmdb_benchmark.rs#L79-L88

If you're able to insert them in sorted order, that may improve write amplification. Alternatively, you can increase the cache size, if you have enough RAM.

marvin-j97 commented 2 months ago

That works much better: down to ~2.74µs per item.