Does this require a CIP (it is technically a consensus-breaking rule)?
What would we want to set this size to? 1MB? 2MB?
Yeah, this would require at least 1 CIP, but as touched on above, the reasoning for the two limits is quite distinct, so we might want to do 2 separate things.
The cap on tx size is the difficult one, as users have indicated that larger blobs can help things like "super blobs". The protocol also might not need such a limit until blocks are increased to meaningfully larger than 4MB. Configurations and expectations could also just need to be changed for larger txs: larger txs take longer to gossip, therefore users should not expect them to be included within the typical 2 block period. For the same reason, proposers using any flavor of compact blocks would need to change their calculations for the inclusion of large txs. If that's the case, then maybe we don't need a limit.
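For rough intuition on the gossip point (illustrative numbers, not measurements from this thread): with ~50 Mbit/s of usable bandwidth between two peers, an 8MB tx takes about (8 × 8) / 50 ≈ 1.3 s to transfer per hop, versus ≈ 0.3 s for a 2MB tx, and that cost is paid again on every re-gossip hop through the mempool.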
For the cap on the number of txs, it seems much less difficult to find an effective number and to see the urgency, at least for the medium term. This might be needed sooner rather than later as well, considering 5000 txs can already fit in a 2MB block.
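As a rough sanity check on that figure (back-of-envelope only): 2,000,000 bytes / 5,000 txs = 400 bytes per tx, which is on the order of a minimal transaction, so this limit is really about per-tx overhead and execution cost rather than blob data.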
But if the problem here is with execution costs, then don't we want to go about this by setting a max gas?
The protocol also might not need such a limit until blocks are increased to meaningfully larger than 4MB
That could be the case, but that increase is on the horizon. Ideally we add the cap before we increase the block size, so that it has less impact on current users.
setting a max gas?
By this, are you referring to setting the max gas value to something other than -1? Adjusting resource pricing is a great alternative to simply hardcoding the max number of txs! That seems like a bit more complex a problem, though, if we continue to price blobs in the same unit as operations that touch state.
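For intuition on how a gas cap doubles as a byte cap (illustrative numbers; the actual gas schedule may differ): if each blob byte costs on the order of 8 gas, a block max gas of 100,000,000 implicitly bounds total blob data to roughly 100,000,000 / 8 = 12.5MB, shared with whatever gas state-touching txs consume, which is exactly the single-unit pricing complication mentioned above.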
by this, are you referring to setting the max gas value to something other than -1?
Yup, I think this would be better if we can calculate a reasonable upper bound
After a sync discussion: we can first have a soft cap via overriding the mempool config in a patch release, and then a consensus-critical hardcoded parameter pending a CIP.
we can first have a soft cap via overriding the mempool config in a patch release
A soft cap on the number of transactions in the mempool or the size of an individual transaction?
Clarified during a call: the soft cap is proposed for the size of an individual transaction:
[mempool]
max_tx_bytes = 7897088
and not for the number of transactions in the mempool.
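For reference, the value above works out to 7,897,088 bytes ≈ 7.5 MiB. If the soft cap lands at the ~2 MB suggested elsewhere in this thread, the override in each node's config.toml would look something like the following (illustrative value, not a decided number):
[mempool]
max_tx_bytes = 2097152  # ~2 MiB; illustrative, pending the CIP discussion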
Strongly approve of this proposal (both the max transaction size and the max number of transactions, or more generally the max resource consumption of the Celestia state machine).
I'd suggest limiting the maximum number of bytes for a single transaction to the current ~2 MB, and increasing it only after carefully studying the effects of larger transaction sizes, since the current benchmarking etc. is more focused on larger block sizes. The max transaction size can be increased later as needed, but by setting it to ~2 MB to start, users can at least continue submitting any blob they could submit today.
I'm not sure a tx size limit needs to be consensus critical; perhaps it can just be a mempool param.
Like, if a validator makes a block with a 32MB tx they received out of band, I don't think that causes any issues.
EDIT: I have split this out into two different issues. You can find the discussion on limiting the amount of computation here: https://github.com/celestiaorg/celestia-app/issues/3820
I'll write a CIP for this. @rootulp @cmwaters @evan-forbes
Can't it just be a mempool parameter, rather than consensus critical?
For posterity, cross-posting this here:
We know we want to do https://github.com/celestiaorg/celestia-core/issues/1446. Fortunately the difference in terms of implementation is very small, so we can move forward with implementing a versioned constant for max tx size, and then make it a tx validity rule after.
The reason to be defensive here is that once users have the ability to post blobs > 2MB, there's no going back, as lowering the limit later could break users.
Without the ability to chunk, an 8MB tx would take longer to propagate; how much longer depends on many factors. The existing block prop inherently chunks block data, so there the only effect is the tx taking longer to be picked up by a validator. The effect of an 8MB vs 2MB tx on compact blocks is more complex, and the same goes for vacuum. This means we'd have to implement chunking for both, which complicates the initial implementations.
The main issue mentioned by John was the potential for spam on the p2p layer: we are unable to cut off a peer until we download the entire 8MB message. Fortunately, fixing this does not require a tx validity rule, only a versioned constant.
To summarize: we should be able to handle 8MB txs, but we would then need to add chunking to compact blocks and vacuum. We want a versioned constant no matter what.
I'm supportive of simply adding the versioned constant at 2MB in the mempool, with a plan to increase it to 8MB after chunking is added to vacuum or compact blocks. Users can still get an 8MB tx included in a block if they submit directly to a validator and wait for them to produce a block. If this occurs, then users clearly want 8MB blocks, and we need to add chunking to vacuum and compact blocks to support those users.
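To make the "versioned constant" idea concrete, here is a minimal sketch of what it could look like; the package, function name, version number, and values are placeholders, not the actual celestia-app API:

```go
package appconsts

// MaxTxBytes returns the maximum size of a single transaction in bytes for a
// given app version. Illustrative sketch of the versioned-constant approach
// discussed above; the names, versions, and values are placeholders.
func MaxTxBytes(appVersion uint64) int {
	const versionWithTxSizeCap = 3 // placeholder for whichever version ships the cap
	if appVersion >= versionWithTxSizeCap {
		return 2 * 1024 * 1024 // start at ~2 MiB, raise once chunking lands
	}
	return 8 * 1024 * 1024 // earlier behaviour: effectively bounded only by block size
}
```

Keeping the number behind a versioned constant means the same value can back a mempool-level check now and a tx validity rule later.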
Can't it just be a mempool parameter, rather than consensus critical?
I'm wondering the same thing. Earlier in this thread, we discussed addressing this issue via the existing mempool config:
[mempool]
max_tx_bytes = 7897088
Why does https://github.com/celestiaorg/celestia-app/pull/3909 introduce a versioned constant and a tx validity rule instead of relying on the mempool parameter? Perhaps this can be clarified in the CIP.
@rootulp there are a few motivations, but non-exhaustively: once people start using large blobs, the max blob size can't be decreased without breaking them. If throughput is the bottleneck, and large blobs also require large squares, then that could actually prevent decreasing block size and block time at the same time while keeping throughput the same. For this and other unknown-unknown cases, it's better to be safe than sorry; otherwise we might be painting ourselves into a corner.
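To put illustrative numbers on that trade-off (not figures from this thread): an 8MB block every 12 s and a 2MB block every 3 s both provide ~0.67 MB/s of throughput, but only the former can carry an 8MB blob, so once users rely on 8MB blobs the smaller-and-faster configuration is no longer an option even though throughput is unchanged.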
I understand that we don't want to claim support for 8 MiB blobs now because we may need to decrease that later. I was wondering why we chose to limit tx size via a consensus-breaking change instead of changing the default mempool MaxTxBytes. IMO if we change the default max tx bytes to 2 MiB (and ask validators to configure that locally), then I expect most blob submitters to continue submitting blobs < 2 MiB, because blobs larger than that will be rejected by most nodes' mempools.
Summary
Similar to how Celestia has constraints on block and square size to ensure that the network does not fall over under excess demand, we should also consider adding constraints on the max tx size and the max number of transactions.
Currently we do have an antehandler that sets the max tx size to the current max block size (note that nodes can locally set a smaller max tx size for their mempool). However, as the block size jumps to 8MB, having to gossip an 8MB transaction without any chunking could be detrimental to the network. As a precaution we should set a max tx size that is no longer equal to the block size. This also makes it easier for submitters to know how large they should batch their transactions.
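For readers less familiar with where such a check lives, here is a minimal sketch of an ante decorator enforcing a tx size cap. The decorator name and hardcoded constant are illustrative (the real check would presumably read a versioned constant), and only the generic cosmos-sdk AnteDecorator interface is assumed:

```go
package ante

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

// maxTxBytes is an illustrative ~2 MiB cap, decoupled from the block size.
const maxTxBytes = 2 * 1024 * 1024

// MaxTxBytesDecorator rejects transactions whose raw encoded size exceeds
// maxTxBytes. Sketch only; names do not match the actual celestia-app code.
type MaxTxBytesDecorator struct{}

func (d MaxTxBytesDecorator) AnteHandle(
	ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler,
) (sdk.Context, error) {
	// ctx.TxBytes() carries the raw encoded transaction during CheckTx/DeliverTx.
	if size := len(ctx.TxBytes()); size > maxTxBytes {
		return ctx, sdkerrors.ErrTxTooLarge.Wrapf(
			"tx size %d bytes exceeds the %d byte limit", size, maxTxBytes)
	}
	return next(ctx, tx, simulate)
}
```

Whether a check like this stays a mempool/ante guard or becomes a consensus-critical validity rule is exactly the question discussed in the thread above.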