FuelLabs / fuel-core

Rust full node implementation of the Fuel v2 protocol.

State Rewind #451

Open Voxelot opened 2 years ago

Voxelot commented 2 years ago

The database in fuel-core needs the ability to rewind state to a previous point in time. This will allow nodes to contest invalid state transitions on L1 via IVG fraud-proofs.

Each atomic batch-write to the database is composed of either insert operations (column id: u32, key: Vec<u8>, value: Vec<u8>) or delete operations (column id: u32, key: Vec<u8>). By keeping a history of these batches, we'll be able to create an in-memory overlay over the database where batches are used to rewind from the current state tip. Both insert and delete operations will now also need to include the previous value associated with the key in order to be rewound: (column id: u32, key: Vec<u8>, value: Vec<u8>, prev_value: Vec<u8>) and (column id: u32, key: Vec<u8>, prev_value: Vec<u8>).
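As a rough illustration (these are illustrative names, not the actual fuel-core types), the rewindable operations could be modeled like this:

```rust
// Sketch of a rewindable write operation; field names are hypothetical.
pub enum WriteOperation {
    Insert {
        column: u32,
        key: Vec<u8>,
        value: Vec<u8>,
        /// Value previously stored under `key` (None if the key was new);
        /// restored when the batch is applied in reverse.
        prev_value: Option<Vec<u8>>,
    },
    Delete {
        column: u32,
        key: Vec<u8>,
        /// Value removed by this operation; re-inserted on rewind.
        prev_value: Vec<u8>,
    },
}
```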

Add a new column to the database that contains a serialized blob of these operations, bundled together by block height.
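A possible shape for that column is sketched below; the column id and the big-endian height encoding are assumptions, not a settled design:

```rust
// Hypothetical history column: one entry per block, keyed by height.
// Big-endian keys make RocksDB's lexicographic order match height order.
const COLUMN_HISTORY: u32 = 42; // placeholder column id

fn history_key(block_height: u32) -> [u8; 4] {
    block_height.to_be_bytes()
}

// The value stored under `history_key(h)` would be the serialized
// Vec<WriteOperation> that was applied while committing block `h`.
```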

In order to prevent ongoing state changes (e.g. new blocks committed during an IVG session) from altering the rewind overlay, we'll also need the ability to snapshot the current database state at the start of the rewind operation. This should be fairly efficient, as RocksDB can build the snapshot from hard links as long as the snapshot is on the same filesystem as the database: http://rocksdb.org/blog/2015/11/10/use-checkpoints-for-efficient-snapshots.html
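For reference, a minimal sketch of taking such a checkpoint with the `rocksdb` crate (the directory path is illustrative); when the checkpoint lives on the same filesystem, RocksDB links the SST files instead of copying them:

```rust
use rocksdb::{checkpoint::Checkpoint, DB};

// Create a point-in-time checkpoint of the live database before starting a
// rewind session. `checkpoint_dir` is a hypothetical location.
fn snapshot_for_rewind(db: &DB, checkpoint_dir: &str) -> Result<(), rocksdb::Error> {
    let checkpoint = Checkpoint::new(db)?;
    checkpoint.create_checkpoint(checkpoint_dir)
}
```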

The in-memory database will also need to support state rewind, but it could just clone all the data for snapshotting, since it doesn't need to be very efficient and we don't expect the in-memory DB to see significant usage outside of short-lived integration tests.

rakita commented 2 years ago

Blockchain database growth is one of the major problems in the space that has yet to be solved. There are two parts here: state growth (how many current UTXOs and contract slots we have) and history growth (UTXOs that were pruned, contract slot changes per block or transaction).

An extreme example, where the history and every change set are saved, is an ETH archive node, and its size gets very high: https://etherscan.io/chartsync/chainarchive. You can see that the database history grows by a few terabytes per year.

This is the data footprint of an ETH pruned node, where all historic blocks are saved but history changes are only kept for recent blocks (I think OE keeps roughly the last ~100 blocks while geth keeps ~1000 or so; it is configurable, and I'm not sure what Etherscan uses): https://etherscan.io/chartsync/chaindefault

The current state-of-the-art database model in ETH is Erigon's, where they managed to shrink a 10 TB history model down to 1-2 TB: https://github.com/ledgerwatch/erigon/blob/devel/docs/programmers_guide/db_walkthrough.MD This is of course not applicable to us in full form, as ETH is account based while we are UTXO based, but an account-based mechanism is used by our contract storage, and some of the ideas are going to be valuable and applicable.

ETH nodes also all run on NVMe SSDs, as that is needed to keep up with the read/write speed when executing blocks. This will probably be mitigated a little for us since we don't have a single state root, but on the other hand our transactions are a lot bigger than Ethereum's.

Having a write/delete log will do the job of giving us the ability to rewind and reorg blocks, but it will not give us the ability to execute contracts against past state and get debugger logs, for example.

Freezing the DB is a new thing for me. If a reorg (reorganization of blocks) happens, this needs to be atomic: until the reorg is confirmed we still need to accept new blocks, and once it is confirmed our newest best block becomes invalid and everything needs to pause and revert back to the block before the invalid one. There is no need to include new blocks on the invalid chain head, as it has invalid state and our new valid state and new block head are in the past.

ControlCplusControlV commented 2 years ago

Are there any storage concerns to keep in mind now when writing this? I know keeping state as minimal as possible is necessary for decentralization, but I'm not sure if there is anything specific I should do here.

Voxelot commented 2 years ago

> I know keeping state as minimal as possible is necessary for decentralization, but I'm not sure if there is anything specific I should do here.

This can be deferred to a follow-up task, but we should have the ability to configure state-diff pruning based on the IVG challenge window (e.g. 1-2 weeks), as we don't need to keep these diffs for proving past that point.
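As a rough sketch of what that follow-up could look like (the names are hypothetical, not an agreed config surface), the retention window would just be a configurable number of blocks sized to cover the challenge period:

```rust
/// Hypothetical pruning configuration for the rewind history.
pub struct HistoryPruningConfig {
    /// How many blocks of state diffs to retain behind the tip, sized so the
    /// IVG challenge window (e.g. 1-2 weeks of blocks) is always covered.
    pub state_rewind_window: u64,
}

impl HistoryPruningConfig {
    /// Diffs for heights at or below the returned value are safe to prune.
    pub fn prune_below(&self, tip_height: u64) -> u64 {
        tip_height.saturating_sub(self.state_rewind_window)
    }
}
```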

> Freezing the DB is a new thing for me

It's likely there are other efficient ways to accomplish this, and I'm open to alternate suggestions. Given our current structure, the in-memory overlay will pass reads through to the underlying datastore for any keys it hasn't yet encountered in a diff. If the underlying datastore gets updated (for any valid reason), this could cause issues for the overlay, as it may return an unexpected value if a key-value pair was changed in the underlying datastore outside of any diffs. Here's a diagram example:

  +--------------------+  |  +-------------+     +-------------+                                                                                         
  | Rewind View        |  |  |             |     |             |                                                                                         
  |                    |  |  |   Block 1   |     |   Block 2   |                                                                                         
  |                    |  |  |             |     |             |                                                                                         
  | a: 2 (passthrough) |  |  | update a: 1 |     | update a: 2 |                                                                                         
  | b: 1 (passthrough) |  |  | update b: 1 |     |             |                                                                                         
  |                    |  |  |             |     |             |                                                                                         
  +--------------------+  |  +-------------+     +-------------+                                                                                         
                          |                            ^                                                                                                 
                          |                            |                                                                                                 
                          |                       Start Rewind                                                                                           
                          |                                                                                                                              
  +--------------------+  |  +-------------+     +-------------+                                                                                         
  | Rewind View        |  |  |             |     |             |                                                                                         
  |                    |  |  |   Block 1   |     |   Block 2   |                                                                                         
  |                    |  |  |             |     |             |                                                                                         
  | a: 1 (in-memory)   |  |  | update a: 1 |     | update a: 2 |                                                                                         
  | b: 1 (passthrough) |  |  | update b: 1 |     |             |                                                                                         
  |                    |  |  |             |     |             |                                                                                         
  +--------------------+  |  +-------------+     +-------------+                                                                                         
                          |        ^                                                                                                                     
                          |        |                                                                                                                     
                          |   Rewind diff                                                                                                                
                          |                                        New Valid Block Added                                                                 
  +--------------------+  |  +-------------+     +-------------+     +-------------+                                                                     
  | Rewind View        |  |  |             |     |             |     |             |                                                                     
  |                    |  |  |   Block 1   |     |   Block 2   |     |   Block 3   |                                                                     
  |                    |  |  |             |     |             |     |             |                                                                     
  | a: 1 (in-memory)   |  |  | update a: 1 |     | update a: 2 |     | update a: 3 |                                                                     
  | b: 3 (passthrough) |  |  | update b: 1 |     |             |     | update b: 2 |                                                                     
  |                    |  |  |             |     |             |     |             |                                                                     
  +--------------------+  |  +-------------+     +-------------+     +-------------+                                                                     

Because there could be a very large number of potential keys, we would pass through to the underlying DB for anything that isn't cached in memory from the diff. However, in this case the rewind view would return 3 instead of 1 for b, without some way to avoid passing through to state that comes after the rewind has started.
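In code, the passthrough behavior that creates this hazard looks roughly like the following (a sketch with made-up types, not the actual overlay): keys restored by rewound diffs are answered from memory, while everything else falls through to the live datastore, which may have moved on.

```rust
use std::collections::HashMap;

/// Minimal stand-in for the underlying key-value store.
trait KeyValueStore {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
}

/// Rewind view: an in-memory overlay of pre-diff values over the live DB.
struct RewindView<'a, S: KeyValueStore> {
    /// Previous values restored from the diffs rewound so far.
    overlay: HashMap<Vec<u8>, Option<Vec<u8>>>,
    /// Live datastore; new blocks may still be written to it.
    underlying: &'a S,
}

impl<'a, S: KeyValueStore> RewindView<'a, S> {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        match self.overlay.get(key) {
            // The key was touched by a rewound diff: serve the old value.
            Some(prev) => prev.clone(),
            // Passthrough: this is where `b: 3` leaks in from Block 3.
            None => self.underlying.get(key),
        }
    }
}
```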

Another approach could be to suffix all keys with the block number, and whenever retrieving older values, leverage the lexicographic ordering of keys in RocksDB to fetch the latest key before the block number we care about. The benefit is that the rewind op would then be O(1) reads from our perspective, would only touch the keys it needs to, and we wouldn't have to perform as much blob-based serialization of diffs. While it would duplicate every key-value in the same table, I don't think it would be much different from storing diff blobs.
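A sketch of the key encoding this implies (the helper is hypothetical): appending a big-endian block height keeps all versions of a key adjacent and sorted by height under RocksDB's lexicographic ordering.

```rust
/// Hypothetical encoding: `key || big-endian block height`.
fn versioned_key(key: &[u8], block_height: u32) -> Vec<u8> {
    let mut out = Vec::with_capacity(key.len() + 4);
    out.extend_from_slice(key);
    // Big-endian so lexicographic key order matches numeric height order.
    out.extend_from_slice(&block_height.to_be_bytes());
    out
}
```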

rakita commented 2 years ago

Very good diagram!

I was aiming at the use case where we want to revert blocks because a block in history got reverted by IVG. In that case, if we have started the rewind process, "New Valid Block Added" is not correct: it is just a new block on the bad chain (branch) that needs to be discarded with the rest of them.

Is this a question about having the ability to re-execute a past block to generate an IVG if needed? In that case I am not sure this is needed on the side-chain, as we will not vote on blocks we don't want to commit to the contract; Fuel consensus gives us proof that the block is okay, and in the case of a manual override we just need to review old state, and the first use case comes into play.

"but we will not have ability to execute contract that are set in past and get debugger logs for example." rewind as batches does not allow us to this as you pointed new block can override the db state.

Regarding the RocksDB way of snapshotting, I feel this could be a trap for us, since the whole DB is going to be snapshotted, and it would be better to go for a universal path forward, but it can probably work.

The way most ETH clients do this and retain the ability to execute old blocks is by cross-indexing the data by block change, so you can easily jump to the past while still being able to append new blocks without disrupting the past index. In our case, UTXOs are not a problem, but contract storage is mutable and harder to track. I will need some time to extract and spec out exactly how it is done, but before that I will need to finish the relayer-related work.

ControlCplusControlV commented 1 year ago

I was planning on building out a branch with state rewind as initially described by @Voxelot, using a new DB column to store changes from a previous DB snapshot (which would be configurable based on the last snapshot, so only 7 extra days of data are stored), but it sounds like you have a different design in mind @rakita that you want some more time to spec out? Do you want me to continue with the initial design, or would you prefer I tackle some other issues while you work out a different design?

rakita commented 1 year ago

The initial design would need to be replaced in the future, and we don't get an instant benefit from rushing it now, so it is better to do it right from the start. It is best to wait and spec this out; I will need to dig around in how it is done in OE, and if we go with the Erigon design that would potentially mean a better data layout (smaller DB size), but it needs a rework of how contract storage is saved.

Voxelot commented 1 year ago

One interesting data point about rocksdb snapshots: Every standard iterator in rocksdb uses a snapshot on creation by default, so it's something we already use in many places today implicitly. This is to ensure the iterator results are deterministic and don't change as new data comes in (which is similar to our concern here).

https://github.com/facebook/rocksdb/wiki/Iterator#consistent-view

Appending a block height suffix to all of our keys would give us a simple data model for jumping around in time. It would also simplify other issues, such as storing state diffs of imported unfinalized blocks, as we could easily verify a sequence of unfinalized blocks that build on each other without complicated diff management. The performance downside of this approach, which may outweigh the data-model benefits, is that every get op would essentially become an iterator. The start cursor would be set to key bytes || block height + 1, and then we'd iterate backwards one step to obtain the most recent version of the key we are looking for. This would prevent us from using parallel operations like multi-get (which could be used to check the spent status of all UTXO ids in parallel, etc.), and would require a snapshot to fetch any state. The snapshot creation cost could be lowered by taking one snapshot before block execution and reusing it across all the iterators (gets).
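A sketch of that read path with the `rocksdb` crate, reusing the hypothetical key-suffix encoding from the earlier comment (this assumes a recent crate version where the iterator yields `Result`s, and it ignores prefix collisions between distinct keys for brevity). A reverse seek to `key || height` lands on the greatest key at or below the target, which is the same lookup as seeking to `height + 1` and stepping back once:

```rust
use rocksdb::{Direction, IteratorMode, DB};

/// `key || big-endian block height`, as in the earlier sketch.
fn versioned_key(key: &[u8], block_height: u32) -> Vec<u8> {
    let mut out = key.to_vec();
    out.extend_from_slice(&block_height.to_be_bytes());
    out
}

/// Newest value written for `key` at or before `height`, if any.
fn get_at_height(db: &DB, key: &[u8], height: u32) -> Option<Vec<u8>> {
    let target = versioned_key(key, height);
    // A reverse `From` iterator seeks to the greatest key <= `target`.
    let mut iter = db.iterator(IteratorMode::From(target.as_slice(), Direction::Reverse));
    match iter.next() {
        Some(Ok((found, value))) if found.starts_with(key) => Some(value.to_vec()),
        _ => None,
    }
}
```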

rakita commented 1 year ago

> One interesting data point about rocksdb snapshots: Every standard iterator in rocksdb uses a snapshot on creation by default, so it's something we already use in many places today implicitly. This is to ensure the iterator results are deterministic and don't change as new data comes in (which is similar to our concern here).

I didn't know about this beforehand, but it led me to look into why it works that way, and I finally got why Erigon didn't use RocksDB/LevelDB. This is a good reference: https://github.com/ledgerwatch/erigon/wiki/Choice-of-storage-engine

A good initial read on serializability: https://vladmihalcea.com/serializability/ It seems that concurrency control dictates a lot of DB behavior. Concurrency control is needed to guarantee that consistency is preserved when we write and read things in the DB.

Snapshot isolation is what's needed for iteration and when we want to read data, and RocksDB seems to have done a good job there: https://cockroachlabs.com/blog/cockroachdb-on-rocksd/#rocksdb-snapshots

Multiversion concurrency control (MVCC) seems better in general, as it gives us better snapshot isolation.

On tree indexing, from the same link:

> If, on a final exam at a database class, you asked students whether to build a database on a [log-structured merge tree](https://en.wikipedia.org/wiki/Log-structured_merge-tree) (LSM) or a [BTree](https://en.wikipedia.org/wiki/B-tree)-based storage engine, 90% of your students would probably respond that the decision hinges on your workload. “LSMs are for write-heavy workloads and BTrees are for read-heavy workloads”, the conscientious ones would write.

A B-tree would be preferable for us, as we can expect a lot more reads than writes.

And for this task: logging full writes/deletes to the DB will increase its size more than it should, and it will not give us the ability to execute an arbitrary block/transaction back in time, since applying a 100,000-block diff would take a lot of time to read from the DB and apply in memory.

Here is how to get that with storage indexing: https://docs.google.com/spreadsheets/d/12h2GW-9COQPMDJ1byPJzgqBaFKweAkXn8a6OS6YIEB4/edit#gid=0

Voxelot commented 1 year ago

The TiKV docs describe their motivations for using RocksDB and delve into the LSM vs B-tree issue a bit more in depth: https://tikv.github.io/deep-dive-tikv/key-value-engine/B-Tree-vs-Log-Structured-Merge-Tree.html

Tl;dr: they concluded it's easier to speed up reads via traditional methods like caching and additional indexing layers on an LSM than it is to improve the write performance of a B-tree on disk.

Interestingly, TiKV also achieved MVCC in RocksDB by using "Prefix Bloom Filters" and suffixing each key with a timestamp, similar to how I suggested suffixing each key with a block height earlier. They also made some low-level RocksDB customizations to clean up old key versions (which we'd like to do after the 2-week fraud-proving window).

As far as serializability goes, RocksDB guarantees that once data is committed to the WAL, it will be available and consistent. The DB comparison from Erigon saying RocksDB is brittle/unstable under power loss sounds a little like baloney to me. Jepsen found issues like this while verifying Tendermint on LevelDB, but assuming you wait for puts to be flushed to the WAL, this claim seems dubious.

While it's very possible something like LMDB/MDBX could be faster (or at least Erigon/MDBX has proven faster than Geth on LevelDB), there are still a lot of unknowns given our unique setup. For example, MDBX relies heavily on mmap'ing, which could have other negative consequences in highly concurrent environments. RocksDB has the most features and support and should be more than sufficient to deliver a quality product (at the very least, we shouldn't be any slower or have any more issues than Substrate, Near, or Solana, which all use RocksDB). It's not worth the risk to consider switching DBs at this time until we have substantial motivations beyond MDBX-sponsored rumors and benchmark theatrics.

Currently, we don't make use of the built-in RocksDB transaction API. However, RocksDB has a lot of robust and powerful APIs, such as snapshots and atomic batch writes, which make it possible to implement your own custom transactional behaviors (optimistic vs pessimistic) at the application level. Most databases don't provide this kind of flexibility. This allows us to fine-tune our locking behavior based on the context; e.g. we could even do a hybrid scheme of globally pessimistic vs locally optimistic locking at the contract level.
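To make that concrete, here is a minimal sketch with the `rocksdb` crate (column families and real error handling omitted): reads go through a snapshot for a consistent view, and the staged writes are committed as one atomic batch.

```rust
use rocksdb::{WriteBatch, DB};

/// Apply a block's key-value changes as one application-level transaction:
/// read against a frozen snapshot, then commit atomically via a WriteBatch.
fn apply_block(db: &DB, changes: &[(Vec<u8>, Vec<u8>)]) -> Result<(), rocksdb::Error> {
    // Consistent read view; concurrent writes are not visible through it.
    let snapshot = db.snapshot();

    let mut batch = WriteBatch::default();
    for (key, value) in changes {
        // Example of a validation read against the snapshot before staging.
        let _previous = snapshot.get(key)?;
        batch.put(key, value);
    }

    // The whole batch hits the WAL and memtable atomically.
    db.write(batch)
}
```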


Back to our original discussion around modeling: the indexing for diff blobs was a missing piece in the original proposal here, and it definitely seems like a good idea. Although, it might be worth looking more into the suffix approach that TiKV used for MVCC, as it could be radically more space- and query-time efficient for time traversal than storing and indexing diff blobs.

rakita commented 1 year ago

Oh, the TiKV link looks like great info, thank you for that; I will read it later, as I feel like I am still missing some knowledge on databases.

I think MDBX provides ACID at the filesystem level, while RocksDB does that in memory or with the WAL, so in that respect MDBX seems better. As written on GitHub, it "Allows a swarm of multi-threaded processes to ACIDly read and update several key-value [maps](https://en.wikipedia.org/wiki/Associative_array) and [multimaps](https://en.wikipedia.org/wiki/Multimap) in a locally-shared database." https://github.com/rouzier/libmdbx

I think choosing RocksDB is the right call, as it is used by a lot of projects as their primary database storage. I just wanted to share info about the alternatives, as it was interesting for me to investigate (scratch the surface) a little bit.

I created issue #496, which contains all the changes needed to support the new path we are taking, as it requires more changes to the current columns, and the diff is now specified in more detail.

MitchTurner commented 3 months ago

Is this still relevant?

xgreenx commented 3 months ago

Yeah, we want this feature before mainnet. It would be nice to have it during testnet.