Closed wenmoonsir closed 3 years ago
Hello @wenmoonsir
The Merkle trees we use are binary trees, which, if balanced, have a height of log2(n), where n is the number of leaves. This means that even with 1 million leaves, a proof is only 20 entries long; 1 billion leaves gives 30 entries, and 1 trillion gives 40.
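To see these numbers fall out of the formula, here is a quick sketch (Python used purely for the arithmetic; the trees themselves are Solidity-side):

```python
import math

# Proof length for a balanced binary Merkle tree is ceil(log2(n)) hashes,
# one sibling hash per level of the tree.
for n in (10**6, 10**9, 10**12):
    print(f"{n:>15,} leaves -> {math.ceil(math.log2(n))} proof entries")
```

This prints 20, 30, and 40 entries for the three sizes mentioned above.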
As long as the software used to produce the Merkle tree produces balanced trees, there is no real issue with proofs being too long (they should never be stored, and having them in calldata is not an issue).
To add some numbers to this: each nonzero calldata byte costs 16 gas. For 1 million nodes, a proof is 20 32-byte words, i.e. 640 bytes, or about 10k gas (there is some extra overhead from encoding the proof array, but it's negligible). At current prices (130 gwei/gas, 4110 USD/ETH) that is roughly 5.5 USD to send the proof in calldata.
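The same estimate, spelled out (this assumes every proof byte is nonzero at 16 gas each; zero bytes cost only 4 gas, so this is an upper bound):

```python
# Back-of-the-envelope calldata cost for a 1-million-leaf Merkle proof.
NONZERO_BYTE_GAS = 16                 # gas per nonzero calldata byte
proof_words = 20                      # ceil(log2(1_000_000))
proof_bytes = proof_words * 32        # 640 bytes
gas = proof_bytes * NONZERO_BYTE_GAS  # 10,240 gas

gas_price_gwei = 130
eth_usd = 4110
cost_usd = gas * gas_price_gwei * 1e-9 * eth_usd
print(f"{gas} gas, ~{cost_usd:.2f} USD")  # ~10k gas, ~5.47 USD
```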
There would be some additional cost for verifying the proof, and we should measure that, but calldata cost probably dominates at these proof sizes.
In summary: the limit on tree size will be given by the gas limit of a block (and potentially total calldata limit as is being discussed for a future hard fork), but proof sizes grow logarithmically so we think in practice proof size is not a concern.
@frangio that's an interesting point, thanks for bringing it up! I use another chain which costs way less than erc20, but the same idea applies.
Cool. :) Was worried about the size but all seems good.
Hi, I'm using the MerkleProof module to verify whether a node belongs to the Merkle tree. It's just the standard one. I couldn't find any docs or issues related to the size of the tree, so I would like to ask if you have done any benchmarks on this. For example, 1M nodes max or something like that.
MerkleProof.verify takes in a proof of type bytes32[]. Is there potentially any size limit on this?
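For intuition on why the proof array stays small: verification folds the leaf up the tree with one hash per proof entry, so the work is proportional to the proof length, i.e. log2(n). A minimal Python sketch of that loop (OpenZeppelin's MerkleProof hashes the sorted pair with keccak256; sha256 stands in for it here, since the hash choice doesn't affect the shape of the algorithm):

```python
import hashlib

def _hash_pair(a: bytes, b: bytes) -> bytes:
    # Hash the sorted pair, mirroring OpenZeppelin's commutative pair hash.
    # sha256 is a stand-in for keccak256 for this illustration.
    lo, hi = sorted((a, b))
    return hashlib.sha256(lo + hi).digest()

def verify(proof: list[bytes], root: bytes, leaf: bytes) -> bool:
    # One hash per proof entry: the loop runs ceil(log2(n)) times,
    # regardless of how many leaves the tree has.
    computed = leaf
    for sibling in proof:
        computed = _hash_pair(computed, sibling)
    return computed == root

# Tiny 4-leaf example: prove membership of leaves[0].
leaves = [hashlib.sha256(bytes([i])).digest() for i in range(4)]
n01 = _hash_pair(leaves[0], leaves[1])
n23 = _hash_pair(leaves[2], leaves[3])
root = _hash_pair(n01, n23)
assert verify([leaves[1], n23], root, leaves[0])
```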
Thank you!