Open · eskimor opened this issue 5 months ago
The only user of this right now is Snowbridge.
Make the Merkle tree cheaply updatable and store it in state. Update only the branches for parachains actually advancing their head data; this should work.
Alternatively ... use state proofs directly and don't bother updating this at all?
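To illustrate the first option, here is a minimal sketch (not the runtime code) of an updatable binary Merkle tree. The `UpdatableMerkleTree` type and its flat `levels` layout are purely illustrative, and `sp_core::keccak_256` stands in for whatever hasher the runtime ends up using; in the runtime, `levels` would live in state, so a head update touches only the O(log n) nodes on its path instead of rebuilding the whole tree:

```rust
// Minimal sketch of an updatable binary Merkle tree (illustrative only).
// All node hashes are kept around (in the runtime this would live in state),
// so changing one leaf recomputes only its O(log n) path to the root.
use sp_core::keccak_256;

struct UpdatableMerkleTree {
    /// levels[0] holds the leaf hashes, levels.last() holds [root].
    levels: Vec<Vec<[u8; 32]>>,
}

impl UpdatableMerkleTree {
    /// Build the full tree once (assumes at least one leaf).
    fn new(leaves: &[Vec<u8>]) -> Self {
        let mut levels = vec![leaves.iter().map(|l| keccak_256(l)).collect::<Vec<_>>()];
        while levels.last().unwrap().len() > 1 {
            let next = levels
                .last()
                .unwrap()
                .chunks(2)
                .map(|pair| match pair {
                    [a, b] => hash_pair(a, b),
                    // An unpaired node is promoted unchanged.
                    [a] => *a,
                    _ => unreachable!(),
                })
                .collect();
            levels.push(next);
        }
        Self { levels }
    }

    /// Replace one leaf and recompute only the hashes on its path to the root.
    fn update_leaf(&mut self, mut index: usize, new_leaf: &[u8]) {
        self.levels[0][index] = keccak_256(new_leaf);
        for level in 0..self.levels.len() - 1 {
            let sibling = index ^ 1;
            let parent = if sibling < self.levels[level].len() {
                let (l, r) = if index % 2 == 0 { (index, sibling) } else { (sibling, index) };
                hash_pair(&self.levels[level][l], &self.levels[level][r])
            } else {
                self.levels[level][index] // promoted unpaired node
            };
            index /= 2;
            self.levels[level + 1][index] = parent;
        }
    }

    fn root(&self) -> [u8; 32] {
        self.levels.last().unwrap()[0]
    }
}

fn hash_pair(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut buf = [0u8; 64];
    buf[..32].copy_from_slice(a);
    buf[32..].copy_from_slice(b);
    keccak_256(&buf)
}
```

With k parachains advancing per block, the per-block cost then becomes O(k · log n) hashes instead of O(n).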
State proofs are too expensive to verify on Ethereum :)
@ordian ... we will need a fix here, otherwise Snowbridge will cease to work with chains moving to Coretime 😬
A potential intermediate fix:
Define a constant like this:
```rust
/// This is an intermediate "fix" for this issue: https://github.com/orgs/paritytech/projects/119/views/20?pane=issue&itemId=66568838
///
/// It does not actually fix it, but makes the worst case better. Without that limit someone could completely DoS the
/// relay chain by registering a ridiculously high number of paras. With this limit the same attack could lead to some
/// parachains ceasing to be able to communicate via Snowbridge.
const MAX_PARA_HEADERS_IN_PROOF: usize = 512;
```
and use it in benchmarking here and change the code here to something like this:
```rust
let mut para_heads = parachains_paras::Heads::<Runtime>::iter()
    .take(MAX_PARA_HEADERS_IN_PROOF)
    .map(|(id, head)| (id.into(), head.0));
// ... sort and build the tree.
```
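For context, the elided sort-and-build step could look roughly like this, modeled on how the root is built today (a sketch, not verbatim runtime code: `mmr::Hashing` is assumed to be the runtime's MMR hasher alias, Keccak256 on the relay chain, and `Encode` comes from `parity-scale-codec`):

```rust
use codec::Encode;

let mut para_heads: Vec<(u32, Vec<u8>)> = parachains_paras::Heads::<Runtime>::iter()
    .take(MAX_PARA_HEADERS_IN_PROOF)
    .map(|(id, head)| (id.into(), head.0))
    .collect();
// Sort by para id so the leaf order (and thus the root) is deterministic.
// Note: `take` cuts by storage-map iteration order, i.e. effectively by key
// hash, so *which* paras survive the cut is arbitrary (one of the open
// questions around this workaround).
para_heads.sort();
// SCALE-encode each (id, head) pair into a leaf and fold into a single root.
let root = binary_merkle_tree::merkle_root::<mmr::Hashing, _>(
    para_heads.into_iter().map(|pair| pair.encode()),
);
```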
Cc @vgeddes @alistair-singh
Ethereum-side parachain proof contracts will likely need adapting, depending on the chosen solution.
> 1. Make the Merkle tree cheaply updatable and store it in state.
> 2. Don't re-compute the entire tree each block, but instead update it only for parachains actually advancing their head data.
@Lederstrumpf and I discussed this exact thing when considering using this tree in the MMD XCMP design; in the XCMP forum post write-up we discuss how it isn't currently update-able:
> There is a caveat that the current `BinaryMerkleTree` implementation in Substrate is not updateable at the moment

(found here)
Nice find, Rob. The best solution is making this thing update-able. Currently the `binary_merkle_tree` crate is honestly extremely simple in its features, and the implementation is, well... meh. It could use some work.
Please take a look at https://github.com/paritytech/polkadot-sdk/pull/4751, which implements the suggested intermediate fix, and at the open questions there.
The workaround in #4751 should work for Snowbridge at least; I have reviewed it. Note that we really only need to verify BridgeHub headers.

If we upgrade the "intermediate fix" in future, it needs to be done in a phased manner, to allow us to upgrade our Ethereum contracts without breaking the bridge.
cc @seunlanlege, not sure how this impacts your bridge.
Yeah, state proofs are impossible to verify on Ethereum, especially because of the BLAKE2b hash function.
I think it's fine to place a bound on the number of parachain headers. Ideally, deregistered parachains should have their head data pruned from storage so the set doesn't grow too big.
We only collect head data for chains which are registered as legacy parachains here.
This could easily be fixed by iterating the headers directly. The issue is that the number of registered parachains could potentially grow very large. It is economically bounded by the currently rather high deposits, but there are already discussions ongoing about reducing them.
We will build this Merkle tree on each and every block, so it should stay not only bounded but also rather cheap.
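To make the "iterate directly" point concrete, a rough sketch of the two approaches, using the `Parachains` and `Heads` storage items of the paras pallet referenced above (approximate, not verbatim runtime code):

```rust
// Today (roughly): walk the `Parachains` list, which only contains
// lease-holding ("legacy") parachains, and look up each head individually.
let legacy_heads: Vec<(u32, Vec<u8>)> = parachains_paras::Parachains::<Runtime>::get()
    .into_iter()
    .filter_map(|id| {
        parachains_paras::Heads::<Runtime>::get(&id).map(|head| (u32::from(id), head.0))
    })
    .collect();

// Proposed: iterate the `Heads` map itself. This covers every registered
// para, including chains on Coretime, but its size is only economically
// bounded by the registration deposit.
let all_heads: Vec<(u32, Vec<u8>)> = parachains_paras::Heads::<Runtime>::iter()
    .map(|(id, head)| (u32::from(id), head.0))
    .collect();
```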
I think the best way to fix this is the following:

1. Make the Merkle tree cheaply updatable and store it in state.
2. Don't re-compute the entire tree each block, but instead update it only for parachains actually advancing their head data.

Alternatively ... use state proofs directly and don't bother updating this at all?