meilisearch / heed

A fully typed LMDB wrapper with minimum overhead 🐦
https://docs.rs/heed
MIT License

Introduce the `longer-keys` feature which sets `-DMDB_MAXKEYSIZE=0` #263

Closed · tpunder closed this 6 months ago

tpunder commented 6 months ago

This PR (which does not have a corresponding issue) adds a `longer-keys` feature to lmdb-master-sys and heed which sets `-DMDB_MAXKEYSIZE=0` when compiling LMDB. This allows you to use keys larger than the default limit of 511 bytes.

The feature is 100% opt-in.
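For example, a downstream crate would opt in through its Cargo.toml (the version number below is illustrative, not the actual release that shipped this feature):

```toml
[dependencies]
# Hypothetical version; the point is the opt-in `longer-keys` feature flag.
heed = { version = "0.20", features = ["longer-keys"] }
```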

I've added a conditional test to lmdb-master-sys and heed to check that keys larger than 511 bytes can be stored successfully without returning an `MDB_BAD_VALSIZE` error.

Enabling this feature also allows you to use values larger than 511 bytes in databases with the `MDB_DUPSORT` flag set (which was the problem I was running into).

Here is the documentation snippet from http://www.lmdb.tech/doc/group__internal.html:

```c
#define MDB_MAXKEYSIZE ((MDB_DEVEL) ? 0 : 511)
```

> The max size of a key we can write, or 0 for computed max.
>
> This macro should normally be left alone or set to 0. Note that a database with big keys or dupsort data cannot be reliably modified by a liblmdb which uses a smaller max. The default is 511 for backwards compat, or 0 when `MDB_DEVEL`.
>
> Other values are allowed, for backwards compat. However: A value bigger than the computed max can break if you do not know what you are doing, and liblmdb <= 0.9.10 can break when modifying a DB with keys/dupsort data bigger than its max.
>
> Data items in an `MDB_DUPSORT` database are also limited to this size, since they're actually keys of a sub-DB. Keys and `MDB_DUPSORT` data items must fit on a node in a regular page.

tpunder commented 6 months ago

@Kerollmops I think `longer-keys` makes sense since you don't get unlimited key sizes. The actual max key length is architecture-dependent (it is derived from the page size). I've seen values of 8126 bytes on my M1 MacBook Pro and 1982 bytes on my Intel MacBook Pro.
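For reference, here is a sketch of how that computed, architecture-dependent maximum appears to fall out of mdb.c when `MDB_MAXKEYSIZE=0`. The constants below are read off the LMDB source and may vary between versions; this is an illustration of the arithmetic, not an API:

```rust
// Sketch of LMDB's computed max key size when MDB_MAXKEYSIZE=0,
// based on a reading of mdb.c (constants are assumptions taken from
// the LMDB source and may differ between LMDB versions).

const PAGEHDRSZ: usize = 16; // offsetof(MDB_page, mp_ptrs)
const MDB_MINKEYS: usize = 2; // minimum number of keys per page
const INDX_SZ: usize = 2; // sizeof(indx_t), per-node page index slot
const NODESIZE: usize = 8; // node header size
const MDB_DB_SZ: usize = 48; // sizeof(MDB_db), the sub-DB header

/// Computed max key size for a given OS page size.
fn computed_max_key_size(page_size: usize) -> usize {
    // Largest node that still leaves room for MDB_MINKEYS keys on a page...
    let nodemax = (((page_size - PAGEHDRSZ) / MDB_MINKEYS) & !1) - INDX_SZ;
    // ...minus the node header and a sub-DB header.
    nodemax - (NODESIZE + MDB_DB_SZ)
}

fn main() {
    // 4 KiB pages (typical x86_64): matches the 1982 observed on Intel.
    println!("4096  -> {}", computed_max_key_size(4096));
    // 16 KiB pages (Apple Silicon): matches the 8126 observed on M1.
    println!("16384 -> {}", computed_max_key_size(16384));
}
```

At runtime there is no need to hand-compute any of this: the PR exposes `mdb_env_get_maxkeysize` as `Env::max_key_size()`, which returns the effective limit for the current build and platform.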

I've updated the PR to include the rename. I've also hooked up the `mdb_env_get_maxkeysize` function (as `Env::max_key_size()`) so people have a way to check what the value is for them. I've also updated the docs a bit.

It's not clear to me that enabling `longer-keys` actually increases the on-disk overhead (e.g. length bytes) of any stored data. It looks like a `short` is always used for the key length of a node: https://github.com/LMDB/lmdb/blob/88d0531d3ec3a592cdd63ca77647d31c568c24bc/libraries/liblmdb/mdb.c#L1124

I think the database/file format is the same regardless of `MDB_MAXKEYSIZE`.

I think the only real downside is that the database files potentially become non-portable across architectures with different page sizes if you actually store long keys. For example: if I create a database on my M1 MacBook Pro (key length limit of 8126 bytes) with a 2000-byte key and then try to open that file on my Intel MacBook Pro (key length limit of 1982 bytes), I think I would have problems. However, if all of my keys are under 1982 bytes then I don't think there is any problem.
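A minimal sketch of the kind of guard an application could add before relying on long keys, assuming it targets both of the page sizes mentioned above. The constant and helper here are hypothetical application code, not part of heed; in a real program each platform's actual limit would come from `Env::max_key_size()`:

```rust
/// Hypothetical application-side limit: the smallest max key size among
/// all platforms that will ever open this database file. 1982 bytes
/// corresponds to the 4 KiB page size reported above.
const PORTABLE_MAX_KEY_LEN: usize = 1982;

/// Reject keys that would store fine locally but make the file
/// unreadable on a platform with a smaller computed max.
fn check_key_portable(key: &[u8]) -> Result<(), String> {
    if key.len() <= PORTABLE_MAX_KEY_LEN {
        Ok(())
    } else {
        Err(format!(
            "key is {} bytes, exceeds portable limit of {} bytes",
            key.len(),
            PORTABLE_MAX_KEY_LEN
        ))
    }
}

fn main() {
    // A 2000-byte key fits on 16 KiB-page machines but not 4 KiB ones.
    assert!(check_key_portable(&[0u8; 2000]).is_err());
    // Anything at or under 1982 bytes is safe on both.
    assert!(check_key_portable(&[0u8; 600]).is_ok());
    println!("ok");
}
```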

tpunder commented 6 months ago

@Kerollmops I added a note about moving databases between different architectures to the feature flag. I think I also got your proposed changes from above.

tpunder commented 6 months ago

@Kerollmops No, when using the DUPSORT feature the data length is restricted to 8126/1982 bytes (or whatever the limit is on your architecture). DUPSORT values are stored as keys of a sub-DB, which means they must meet any key length limits.

tpunder commented 6 months ago

@Kerollmops I just ran `cargo +nightly fmt` to fix the formatting in `lmdb_ffi.rs`. Hopefully all is good now 😀