near / nearcore

Reference client for NEAR Protocol
https://near.org
GNU General Public License v3.0

Worst-case witness size for stateless validation #9378

Open jakmeier opened 1 year ago

jakmeier commented 1 year ago

Goals

Background

If we want to move to stateless validation, we need to understand the witness sizes involved. This issue is about finding hard limits that are never exceeded, even when maliciously crafted transactions are sent in the worst possible order.

Why should NEAR One work on this

In the world of stateless validation, understanding and limiting state witness size is key, as the time to deliver a state witness over the network is a critical part of stateless validation. We need to make sure the state witness stays within a reasonable limit, so that block processing time stays similar to what we have now and does not overload validator nodes.

One known problem is that a contract can be 4 MB in size while a call to it consumes only 2.5 Tgas in execution cost. In other words, with the 1000 Tgas chunk gas limit we can fit 400 x 4 MB = 1.6 GB of WASM contract code in a single chunk. Adding a gas cost that scales with the actual contract code size would be one way to limit this.
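As a quick sanity check on that arithmetic, here is a back-of-the-envelope sketch (assuming the 1000 Tgas chunk gas limit and the 2.5 Tgas call cost from above):

```rust
fn main() {
    let chunk_gas: u64 = 1_000_000_000_000_000; // 1000 Tgas chunk limit (1 Pgas)
    let call_gas: u64 = 2_500_000_000_000; // 2.5 Tgas per function call, as above
    let contract_bytes: u64 = 4_000_000; // a 4 MB contract

    let calls_per_chunk = chunk_gas / call_gas;
    assert_eq!(calls_per_chunk, 400);
    // 400 calls x 4 MB of code each = 1.6 GB of WASM per chunk.
    assert_eq!(calls_per_chunk * contract_bytes, 1_600_000_000);
}
```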

What needs to be accomplished

Task list

### Tasks
- [x] Size of action receipts
- [ ] Size of data receipts
- [x] State entries accessed per action
- [x] Trie nodes accessed per action
- [x] Max total state accessed per chunk
jakmeier commented 1 year ago

It looks like I will need to split this into several stages. For data receipts, it can get quite tricky to figure out which witnesses they belong in, or how many of them can clump up in one place. I will write more details about them in a future comment.

For now, let me list how large action receipts can become, based on gas and other runtime limits. This does not yet consider the state accessed, or even the output logs produced.

Network Message Sizes

| Action | Network Output / 1 Pgas [MB] | Network Input / 1 Pgas [MB] | Main Parameter |
|---|---|---|---|
| empty action receipt | 0.60 | 0.60 | |
| CreateAccount | 0.00 | 0.00 | |
| DeployContract | 145.83 | 15.48 | action_deploy_contract_per_byte.send |
| FunctionCall | 358.57 | 358.57 | action_function_call_per_byte |
| Transfer | 0.15 | 0.15 | |
| Stake | 0.58 | 0.80 | |
| AddKey | 20.44 | 20.44 | action_function_call_per_byte |
| DeleteKey | 0.69 | 0.69 | |
| DeleteAccount | 0.47 | 0.47 | |
| Delegate (empty) | 1.44 | 1.44 | |

The table above shows that function call actions can become the largest per gas unit. A chunk filled with function calls could be up to 358 MB in size, just for storing the actions. This is true today, before we even start talking about changes that would be necessary for stateless validation.
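All rows in the table follow the same pattern: a per-byte gas parameter caps how many bytes fit into the chunk's 1 Pgas budget. A minimal sketch of that conversion (the parameter value below is illustrative only, not the real `action_function_call_per_byte` value):

```rust
const PGAS: u64 = 1_000_000_000_000_000; // 1 Pgas = 1000 Tgas, one full chunk

/// Bytes of action data that fit into one chunk, given a per-byte gas cost.
fn max_bytes_per_pgas(per_byte_gas: u64) -> u64 {
    PGAS / per_byte_gas
}

fn main() {
    // Hypothetical per-byte cost, chosen only to land near the table's
    // FunctionCall row (~358 MB per Pgas).
    let per_byte_gas = 2_800_000;
    println!("{} MB", max_bytes_per_pgas(per_byte_gas) / 1_000_000); // ~357 MB
}
```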

jakmeier commented 1 year ago

The data dependency rabbit hole looks a bit too deep to reach a conclusion just yet. I will ignore data receipts for now and just look at the other boundaries we can figure out.

jakmeier commented 1 year ago

Accessed state size per action

To understand how much state an action receipt may access in the worst case, we have to consider two parts:

1. How many trie nodes need to be included in the witness
2. How much actual data is accessed

Values

Observations:

Number of trie nodes accessed

Observations:

Results

I've done all the analysis in the spreadsheet I also used for calculating how large receipts themselves are. It should be publicly readable: https://docs.google.com/spreadsheets/d/1m66nacHFbvM0rogeS_lV8uJanhbvQKEF0caY93jMOpg/edit#gid=200844971

Here is the summary. Note that the first row (action receipt) is always necessary, so the touched trie nodes for an account creation transaction will be action_receipt + CreateAccount.

| Action | TTN / Pgas [TTN] | Trie Value Bytes / Pgas [MB] |
|---|---|---|
| empty action receipt | 3'629'630 | * |
| CreateAccount | 0 | 0.00 |
| DeployContract | 702'703 | 145.83 |
| FunctionCall | 36'106'032 | 183.54 |
| Transfer | 0 | 0.00 |
| Stake | 0 | 0.00 |
| AddKey | 2'475'185 | 20.44 |
| DeleteKey | 2'757'895 | 22.78 |
| DeleteAccount | 0 | 0.00 |
| Delegate (empty) | 1'310'000 | 10.82 |

* An empty action receipt needs to access data receipts; these calculations are still ongoing.
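To see why these node counts matter for witness size, multiply them by the node size. As a rough upper-bound sketch (assuming each touched node is a full 512 B branch node, the size quoted later in this thread):

```rust
fn main() {
    const NODE_BYTES: u64 = 512; // full branch node with 16 children
    // Node counts per Pgas, from the table above.
    let function_call_nodes: u64 = 36_106_032;
    let empty_receipt_nodes: u64 = 3_629_630;

    // ~18,486 MB and ~1,858 MB of trie nodes alone, far above any
    // witness size we could ship over the network per chunk.
    println!("{} MB", function_call_nodes * NODE_BYTES / 1_000_000);
    println!("{} MB", empty_receipt_nodes * NODE_BYTES / 1_000_000);
}
```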

Conclusion

jakmeier commented 1 year ago

Suggestion to enforce ~45MB witness size per chunk

Here is an idea for how to limit the worst-case sizes.

We could add additional chunk space limits. Today, chunk space is already limited by gas, by compute costs (for known-to-be-undercharged parameters), and by the size of transactions. We can add more conditions.

Specifically, I would like to add a limit for:

For this to work, we need good congestion control. If the delayed receipts queue gets too long, this by itself could blow up the witness size. Let's say we can keep the queue length per shard below 10 MB; then we have used 3 * 10 MB = 30 MB of witness size so far. The remaining 15 MB come from assumptions about how we enforce the limit.
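For reference, the budget arithmetic implied here (using only the illustrative numbers from this comment):

```rust
fn main() {
    let queue_budget_mb = 10; // assumed cap on the delayed receipts queue per shard
    let queue_factor = 3; // the factor of 3 used above
    let enforcement_slack_mb = 15; // from how the limit is enforced, see below
    assert_eq!(queue_factor * queue_budget_mb + enforcement_slack_mb, 45); // ~45 MB
}
```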

Note on enforcing (soft-)limits

To avoid re-execution, we might want to make these soft limits. (The last applied receipt is allowed to go beyond the limit; we just don't execute another one afterwards.) That's how we currently handle all existing limits. But that means the real limit is the soft limit plus the largest possible size of a single receipt application. If we do nothing, this blows the limits right back up into the hundreds-of-MBs territory.

To limit the effect of the last receipt that goes above the soft limit, I suggest we also add per-receipt limits on network bandwidth and witness size. Going above such a limit would result in an error, which is observable by users as a new failure mode. But the hypothesis is that we can set the limits high enough that normal usage will not hit them.
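A minimal sketch of how the two limits could compose when applying receipts. All names and numbers are hypothetical (chosen so the totals line up with the ~45 MB figure above); this is not actual nearcore code:

```rust
use std::collections::VecDeque;

const CHUNK_WITNESS_SOFT_LIMIT: u64 = 30_000_000; // illustrative soft limit per chunk
const RECEIPT_WITNESS_HARD_LIMIT: u64 = 15_000_000; // illustrative cap per receipt

struct Receipt {
    witness_bytes: u64, // stand-in for the witness size this receipt would add
}

enum Outcome {
    Applied,
    FailedWitnessTooLarge, // the new user-visible failure mode
}

fn apply_receipts(queue: &mut VecDeque<Receipt>) -> Vec<Outcome> {
    let mut witness_total: u64 = 0;
    let mut outcomes = Vec::new();
    while let Some(receipt) = queue.pop_front() {
        // Per-receipt hard limit: a single oversized receipt fails with an error.
        if receipt.witness_bytes > RECEIPT_WITNESS_HARD_LIMIT {
            outcomes.push(Outcome::FailedWitnessTooLarge);
            continue;
        }
        witness_total += receipt.witness_bytes;
        outcomes.push(Outcome::Applied);
        // Soft limit: the receipt that crosses the limit is still applied; we
        // just stop here. The true worst case is therefore the soft limit
        // plus the per-receipt hard limit.
        if witness_total >= CHUNK_WITNESS_SOFT_LIMIT {
            break;
        }
    }
    outcomes
}

fn main() {
    let mut queue: VecDeque<Receipt> =
        (0..10).map(|_| Receipt { witness_bytes: 5_000_000 }).collect();
    // With 5 MB receipts, six are applied before the 30 MB soft limit stops us.
    assert_eq!(apply_receipts(&mut queue).len(), 6);
}
```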

Ballpark numbers:

Hard limit?

The obvious alternative would be to make these hard limits. That is, the receipt that breaches the limit is reverted and will only be applied in the next chunk.

In this design, the per-receipt limit would be the same as the per-chunk limit. Single receipts that go above the chunk limit will never execute successfully, so we would also need to make them fail explicitly, which again introduces a new failure mode for receipt execution. Still, enforcing hard limits could bring the worst-case witness size down from ~45 MB to ~30 MB if we take all the numbers suggested so far.
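Reusing the hypothetical types from the sketch above, the hard-limit variant would check the limit before applying, deferring the crossing receipt to the next chunk and explicitly failing receipts that can never fit:

```rust
fn apply_receipts_hard(queue: &mut VecDeque<Receipt>) -> Vec<Outcome> {
    const CHUNK_WITNESS_HARD_LIMIT: u64 = 30_000_000; // illustrative
    let mut witness_total: u64 = 0;
    let mut outcomes = Vec::new();
    while let Some(receipt) = queue.pop_front() {
        if receipt.witness_bytes > CHUNK_WITNESS_HARD_LIMIT {
            // Larger than any chunk allows: must fail explicitly,
            // the new failure mode mentioned above.
            outcomes.push(Outcome::FailedWitnessTooLarge);
            continue;
        }
        if witness_total + receipt.witness_bytes > CHUNK_WITNESS_HARD_LIMIT {
            // Would breach the chunk limit: revert and retry next chunk.
            queue.push_front(receipt);
            break;
        }
        witness_total += receipt.witness_bytes;
        outcomes.push(Outcome::Applied);
    }
    outcomes
}
```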

walnut-the-cat commented 1 year ago

> For this to work, we need good congestion control.

@jakmeier, is this already handled by local congestion control? Or are you suggesting we do global congestion control as well?

walnut-the-cat commented 1 year ago

Sounds like this needs a NEP?

jakmeier commented 1 year ago

@walnut-the-cat

walnut-the-cat commented 1 year ago

@robin-near, when we are loading the trie into memory, how much do we have to pay attention to TTN? Can we get rid of it completely? In other words, can we cross out the following statement by Jakob?

> Trie nodes likely need explicit limits:
>
> - Function calls can currently access a lot of trie nodes (more than 36 million nodes in a single chunk!).
> - The next biggest offender is simply the overhead for empty receipts. This can also lead to 3M nodes in a chunk.
robin-near commented 1 year ago

Trie node access would be pretty cheap, yeah. But writes are not completely free (they still have to go to disk), so probably keep that.

Not sure about the empty receipts part - what does that mean?

jakmeier commented 1 year ago

> Trie node access would be pretty cheap, yeah. But writes are not completely free (they still have to go to disk), so probably keep that.

Access may be cheap, but this issue is specifically about witness size in the worst case (hard limits we can prove, not practical usage patterns). Even if it's served from memory, you still have to upload and download it over the network, potentially archive it, and so on.

> Not sure about the empty receipts part - what does that mean?

With stateless validation, my assumption here is/was that each trie node involved in the state transition needs to be in the witness (assuming a pre-ZK implementation).

An empty receipt, as in a receipt with no actions, is still a receipt that is stored in and loaded from the trie. But because it's empty, it's particularly cheap in terms of gas costs. In other words, many empty receipts can be included in a single chunk. According to my table above, you can access 3 million trie nodes with just 1000 Tgas if the chunk is filled with empty receipts. This seems like a problem for state witness size, because to prove that a receipt is indeed part of the current trie, you have to include all those trie nodes in the witness for a single chunk.

Does that make sense?

robin-near commented 1 year ago

Oh sorry, I was answering Yoon's question without looking at the context. For the greater state witness size issue, I don't have any useful comment at this moment.

jancionear commented 7 months ago
> | Action | TTN / Pgas [TTN] | Trie Value Bytes / Pgas [MB] |
> |---|---|---|
> | FunctionCall | 36'106'032 | 183.54 |

> Trie nodes likely need explicit limits
>
> - Function calls can currently access a lot of trie nodes (more than 36 million nodes in a single chunk!).
> - (for context: a trie node with 16 children is 512 B large, so if a witness should be limited to 10 MB, we can only fit around 20,000 nodes, definitely not millions of them)

AFAIU the cost of touching a single trie node is `wasm_touching_trie_node: 16_101_955_926`. That means that in one Pgas we can touch 1e15 / 16_101_955_926 = 62104 nodes, not 36 million 0_o

I'm currently trying to construct a contract that generates the largest storage proof possible, and for now I think the most efficient option is to read large values. The cost of reading one value byte is only `wasm_storage_read_value_byte: 5_611_005`, so we can read 1e15 / 5_611_005 = 178221192 bytes = 178 MB of values in one Pgas. Reading 62104 trie nodes, 512 bytes each, would give us only 31 MB of storage proof.

So far I have been able to construct a receipt which generates 24 MB of storage proof by reading large values. That's only half of the theoretical maximum, but I guess other costs get in the way.
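For illustration, here is roughly what the "read large values" approach looks like as contract code. This is a sketch only: it assumes near-sdk, that large values were stored under these keys in a setup step, and the key naming is hypothetical:

```rust
use near_sdk::env;

/// Read `n` pre-populated large values. Gas is charged mostly per value byte
/// (wasm_storage_read_value_byte), while every value, plus the trie nodes on
/// its path, still ends up in the storage proof.
pub fn read_big_values(n: u32) {
    for i in 0..n {
        let key = format!("big_value_{i}");
        let _value = env::storage_read(key.as_bytes());
    }
}
```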

jakmeier commented 7 months ago

> AFAIU the cost of touching a single trie node is `wasm_touching_trie_node: 16_101_955_926`. That means that in one Pgas we can touch 1e15 / 16_101_955_926 = 62104 nodes, not 36 million 0_o

@jancionear Yes, you are correct for cases where we actually charge `wasm_touching_trie_node`, or TTN cost for short. However, if you look at the source document linked above the table (this one), you see that the computation looks at how many trie nodes can be touched using `has_key()`. Since the introduction of flat state, read operations do not charge TTN cost, because we don't even know how many trie nodes there are on the path if we only look the value up in flat state.

But I am not up to speed on all the stateless validation changes. If you reintroduce the TTN cost for state reads as part of stateless validation, that indeed changes the calculations completely.

jancionear commented 7 months ago

> But I am not up to speed on all the stateless validation changes. If you reintroduce the TTN cost for state reads as part of stateless validation, that indeed changes the calculations completely.

With stateless validation we always walk down the trie, even when using flat storage, because we have to record the touched nodes and place them in the storage proof:

```rust
fn lookup_from_flat_storage(
    &self,
    key: &[u8],
) -> Result<Option<OptimizedValueRef>, StorageError> {
    let flat_storage_chunk_view = self.flat_storage_chunk_view.as_ref().unwrap();
    let value = flat_storage_chunk_view.get_value(key)?;
    if self.recorder.is_some() {
        // If recording, we need to look up in the trie as well to record the trie nodes,
        // as they are needed to prove the value. Also, it's important that this lookup
        // is done even if the key was not found, because intermediate trie nodes may be
        // needed to prove the non-existence of the key.
        let value_ref_from_trie =
            self.lookup_from_state_column(NibbleSlice::new(key), false)?;
        debug_assert_eq!(
            &value_ref_from_trie,
            &value.as_ref().map(|value| value.to_value_ref())
        );
    }
    Ok(value.map(OptimizedValueRef::from_flat_value))
}
```

I thought that because of this we also charge for all TTN on the read path, but now I see that we actually don't charge gas for the trie read: `charge_gas_for_trie_node_access` is set to `false` :O. Alex said that we don't charge gas there so as not to risk breaking existing contracts.

Now the math makes much more sense to me. If we don't charge for TTN, then there could be severe undercharging. :+1: :+1: