Closed: rakanalh closed this 2 weeks ago
Let's introduce some definitions:
- D: the data we publish to DA (e.g. a sequencer commitment)
- zip(D): the compressed serialization of D
- commit(x) / reveal(x): the commit and reveal transactions that inscribe x on Bitcoin
The need for chunks arises when zip(D) exceeds the 400kb size limit:
Thus we need to either split zip(D) into chunks or contact a miner so they can include our tx manually.
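The splitting step is the same in every option below. A minimal sketch, assuming a hypothetical 400kb limit constant and helper name (the real codebase may size chunks differently):

```rust
// Hypothetical limit; illustrative only.
const MAX_CHUNK_SIZE: usize = 400 * 1024;

/// Split zip(D) into chunks of at most MAX_CHUNK_SIZE bytes.
/// Data that already fits yields a single chunk.
fn split_into_chunks(zipped: &[u8]) -> Vec<Vec<u8>> {
    zipped.chunks(MAX_CHUNK_SIZE).map(|c| c.to_vec()).collect()
}
```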
There are at least 4 ways chunks can be implemented.
1.
if zip(D) < 400kb: commit(zip(D)) + reveal(zip(D)) // This is how we publish DaData now.
if zip(D) >= 400kb: for chunk in chunks: commit(chunk) + reveal(chunk)
It requires modifying Pg insert_sequencer_commitment: it needs to support multiple txs (chunks).
2.
for chunk in chunks: commit(chunk) + reveal(chunk) // always chunk, even when zip(D) < 400kb
It requires modifying Pg insert_sequencer_commitment: it needs to support multiple txs (chunks).
3.
for chunk in chunks: commit(chunk)
reveal_tx_ids = [ reveal(chunk) for chunk in chunks ]
commit(rollup name, pubkey, signature, reveal_tx_ids)
reveal_tx_id = reveal(reveal_tx_prefix, reveal_tx_ids)
This does not require DA versioning: even if we were on mainnet, we could switch to the new tx type via a bitcoin block height check.
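To make the case-3 flow concrete, here is a sketch with hypothetical names, where txids are faked as u64 hashes purely for illustration (real Bitcoin txids are 32-byte hashes, and commit/reveal are actual transactions):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a Bitcoin txid; illustrative only.
type TxId = u64;

fn fake_txid(payload: &[u8]) -> TxId {
    let mut h = DefaultHasher::new();
    payload.hash(&mut h);
    h.finish()
}

/// Case 3: reveal every chunk, then publish one aggregate tx
/// that pins down the set and order of the chunk reveal txids.
fn publish_chunked(chunks: &[Vec<u8>]) -> (Vec<TxId>, TxId) {
    // commit(chunk) + reveal(chunk) per chunk; only the reveal txids are modeled.
    let reveal_tx_ids: Vec<TxId> = chunks.iter().map(|c| fake_txid(c)).collect();

    // commit(rollup name, pubkey, signature, reveal_tx_ids), then reveal it.
    let mut aggregate_payload = Vec::new();
    for id in &reveal_tx_ids {
        aggregate_payload.extend_from_slice(&id.to_le_bytes());
    }
    let aggregate_reveal_tx_id = fake_txid(&aggregate_payload);
    (reveal_tx_ids, aggregate_reveal_tx_id)
}
```

The key design point is that the aggregate tx is the single source of truth for chunk ordering, so a parser never has to guess which chunks belong together.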
4.
if zip(D) < 400kb: commit(zip(D)) + reveal(zip(D)) // This is how we publish DaData now.
if zip(D) >= 400kb: do as in case 3.
This case is cheaper when zip(D) < 400kb (we need 2 txs instead of 3 for a single chunk).
Case 4) requires DA versioning because we need to separate the aggregated reveal tx from the legacy reveal tx.
Cases 1), 2), 4) require DaData versioning because we need to build DaData::SequencerCommitment from chunks.
Case 3) requires neither DA versioning nor DaData versioning: we know for sure there is only the aggregated reveal tx, and we always build DaData::SequencerCommitment from chunks even if chunks len = 1.
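The parser-side uniformity of case 3 can be sketched as follows (hypothetical function name; the omitted next step would be unzip + deserialize into DaData::SequencerCommitment):

```rust
/// Concatenate chunk payloads in the order fixed by the aggregated reveal tx.
/// This works uniformly even when ordered_chunks.len() == 1, which is why
/// case 3 needs no DaData versioning.
fn reassemble(ordered_chunks: &[Vec<u8>]) -> Vec<u8> {
    ordered_chunks.iter().flatten().copied().collect()
}
```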
We should go with 4.
2 and 3 sound reasonable and simple to implement; however, for the batch proofs it will add a lot of unnecessary cycles and it will raise our DA costs.
Cases 1 and 2 do not work on their own -- they would need a tx describing the chunks and their order, which, once added, is basically case 4.
TBD