vulcanize / cosmos-sdk

:chains: A Framework for Building High Value Public Blockchains :sparkles:
https://cosmos.network/

Deep dive into underlying databases: RocksDB and BadgerDB #10

Closed: i-norden closed this issue 3 years ago

i-norden commented 3 years ago

Outline precisely how we will support both using the MapStore interface (with batching, versioning, and DB snapshotting).

roysc commented 3 years ago

ADR-040: Updates for SMT

As part of the proposed ADR-040, we want to update the SMT data structure so it can use Tendermint DB as a node database. This will entail implementing the MapStore interface for tm-db and its new backends, RocksDB and BadgerDB.

Batching

Batched writes are already implemented in tm-db for both backends. These could simply be wrapped in a method on MapStore, but there is no clear benefit to using them for the SMT operations implemented so far. However, if we want efficient batched writes to the SMT itself, that could be supported in the following way.

The tree will need to read from as well as write to the batch while building it, so the batch behaves more like a transaction object.

These types would be wrapped in the MapStore interface. The SMT would need a new interface type to represent the batch (e.g. WriteBatch). This would wrap a derived SparseMerkleTree object using the transaction object as its MapStore.

Versioning & snapshots

Creating state sync snapshots is not required for SMT data (see https://github.com/cosmos/cosmos-sdk/pull/8430#discussion_r626879836).

It seems versioning is also not needed, though this is not entirely clear. If it is needed, it could be implemented with a MapStore.GetAt([]byte, uint64) method, which forwards to a (new) tmdb.DB.GetAt([]byte, uint64) method.