mimblewimble / grin

Minimal implementation of the Mimblewimble protocol.
https://grin.mw/
Apache License 2.0

Fix aggregation of very large transactions #1327

Closed by ignopeverell 6 years ago

ignopeverell commented 6 years ago

One could make a transaction that's just below our maximum size limits and send it over a Dandelion relay. As soon as it gets aggregated with another transaction, the resulting aggregate would exceed those limits and be impossible to mine.
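To illustrate the failure mode with made-up numbers (neither the caps nor the counts below are grin's actual consensus values, and input count stands in for "size" here):

```rust
// Made-up numbers and a simplified input-count "size" check, purely to
// illustrate the failure mode: two txs that each pass the per-tx check can
// exceed the block limit once Dandelion aggregation sums their inputs.
fn main() {
    let max_tx_inputs = 2_048u64; // hypothetical per-tx cap
    let max_block_inputs = 3_000u64; // hypothetical per-block cap

    let tx_a_inputs = 2_000u64; // just below the per-tx cap
    let tx_b_inputs = 2_000u64; // also just below the per-tx cap

    assert!(tx_a_inputs <= max_tx_inputs && tx_b_inputs <= max_tx_inputs);

    let aggregated_inputs = tx_a_inputs + tx_b_inputs;
    assert!(aggregated_inputs > max_block_inputs); // the aggregate no longer fits in a block
    println!(
        "aggregate has {} inputs, block cap is {}",
        aggregated_inputs, max_block_inputs
    );
}
```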

antiochp commented 6 years ago

So one issue here is that we do not actually validate the resulting tx after aggregating txs together. We verify that the kernels sum correctly, but that is the only validation check we perform.

https://github.com/mimblewimble/grin/blob/328d832bd6e3ac33cf4c160c4d78085dafb20adf/core/src/core/transaction.rs#L576-L580

It should be straightforward to validate the tx (in isolation) at this point.
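Roughly what I have in mind, as a sketch only: `combine_txs` below is a hypothetical stand-in for the existing aggregation logic, and the real `validate()` signature may differ from the code linked above.

```rust
// Sketch only: `combine_txs` is a hypothetical stand-in for the existing
// aggregation logic, and the real `validate()` signature may differ from
// the code linked above.
pub fn aggregate(txs: Vec<Transaction>) -> Result<Transaction, Error> {
    // existing behavior: merge inputs/outputs/kernels and check the kernel sums
    let tx = combine_txs(txs)?;

    // proposed addition: validate the aggregated tx in isolation so an
    // unminable aggregate is rejected before it goes any further
    tx.validate()?;

    Ok(tx)
}
```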

The other related issue (as discussed briefly on gitter) is that we have a discrepancy between the tx validation rules and the rules for reading/writing txs over the wire.

Tx validation checks that we have fewer than consensus::MAX_BLOCK_INPUTS inputs - https://github.com/mimblewimble/grin/blob/328d832bd6e3ac33cf4c160c4d78085dafb20adf/core/src/core/transaction.rs#L434-L437

But transaction read() checks that we have fewer than consensus::MAX_TX_INPUTS inputs (among other checks) -

https://github.com/mimblewimble/grin/blob/328d832bd6e3ac33cf4c160c4d78085dafb20adf/core/src/core/transaction.rs#L302-L307

And these are significantly different values -

https://github.com/mimblewimble/grin/blob/328d832bd6e3ac33cf4c160c4d78085dafb20adf/core/src/consensus.rs#L91-L96

antiochp commented 6 years ago

I think we should -

a) add a post-aggregation tx validation step to fully validate the resulting tx.

b) consolidate the input|output|kernel count validation rules in tx.validate()

c) increase the input limit to match the block input limit. I think it makes sense to allow a huge tx that would end up in a block on its own, effectively filling the block entirely. This way we make aggregation as effective as possible at the tx pool level. @tromp? This may be trickier for limiting outputs, as a block will contain at least one additional output (for the coinbase, and maybe multiple of these). Similarly for the tx kernel limit.

d) move the related count rules out of tx.read() and instead call tx.validate() as part of tx.read(), so the existing validation rules are reused rather than duplicated in tx.read() (see the sketch below).

I can take this on if the proposed approach makes sense.
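For (d), something along these lines, as a sketch only: the trait shape, the hypothetical `read_transaction_fields` helper and the error mapping are approximations, not the actual code.

```rust
// Sketch only: the trait shape, the hypothetical `read_transaction_fields`
// helper and the error mapping are approximations, not the actual code.
impl Readable for Transaction {
    fn read(reader: &mut Reader) -> Result<Transaction, ser::Error> {
        // deserialize the raw fields without any count checks here
        let tx = read_transaction_fields(reader)?;

        // reuse the consolidated input/output/kernel count rules from
        // tx.validate() instead of duplicating them against separate constants
        tx.validate().map_err(|_| ser::Error::CorruptedData)?;

        Ok(tx)
    }
}
```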

tromp commented 6 years ago

Good point about tx limits needing to be a little below block limits to allow for the coinbase. But in principle we should allow aggregation to reach that level.

I do think aggregation might have to be limited by fee compatibility. Miners can set their own policy with respect to the minimum fees they find acceptable for including a tx, so it's not fair to aggregate two txs that pay wildly different fees. Perhaps the normalized (by tx weight) fees should be less than a factor of 2 apart for txs to be aggregated.
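A minimal sketch of that "within a factor of 2" rule; this is just the idea from the comment above, not an implemented grin rule, and the function name is made up. Cross-multiplying avoids floating point.

```rust
// Sketch of the proposed fee-compatibility rule (not an implemented grin rule):
// two txs may be aggregated only if their weight-normalized fee rates are
// within a factor of 2 of each other.
fn fee_rates_compatible(fee_a: u64, weight_a: u64, fee_b: u64, weight_b: u64) -> bool {
    // fee_a / weight_a < 2 * (fee_b / weight_b), and vice versa,
    // expressed via cross-multiplication to stay in integer arithmetic
    fee_a * weight_b < 2 * fee_b * weight_a && fee_b * weight_a < 2 * fee_a * weight_b
}

fn main() {
    // similar fee rates: ok to aggregate
    assert!(fee_rates_compatible(8, 100, 10, 100));
    // one tx pays 3x the fee rate of the other: not aggregated under this rule
    assert!(!fee_rates_compatible(30, 100, 10, 100));
}
```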

antiochp commented 6 years ago

Maybe at the block level we have two limits - one for non-coinbase outputs and one for coinbase outputs? That way we can more robustly ensure tx aggregation will remain within the block-level non-coinbase limits.

Say, for example, a block can have up to 300_000 non-coinbase outputs, and an individual tx can also have up to a corresponding 300_000 outputs. At 300_000 outputs no further aggregation could safely be performed, and a block could contain this tx, and only this tx, while staying within the output limits. (No idea if 300_000 makes sense in this context, it's just an example value, but that's the current block input limit.)
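A small standalone sketch of the two-limit idea; the constant names and the coinbase cap are assumptions, and 300_000 is just the example value from above, not a current consensus value.

```rust
// Sketch only: cap non-coinbase outputs separately from coinbase outputs, and
// align the per-tx output cap with the non-coinbase cap so a fully-aggregated
// tx still fits in a block alongside its coinbase output(s).
const MAX_BLOCK_NON_COINBASE_OUTPUTS: usize = 300_000; // example value from the discussion
const MAX_BLOCK_COINBASE_OUTPUTS: usize = 1; // assumption for the sketch
const MAX_TX_OUTPUTS: usize = MAX_BLOCK_NON_COINBASE_OUTPUTS;

fn block_output_counts_ok(non_coinbase_outputs: usize, coinbase_outputs: usize) -> bool {
    non_coinbase_outputs <= MAX_BLOCK_NON_COINBASE_OUTPUTS
        && coinbase_outputs <= MAX_BLOCK_COINBASE_OUTPUTS
}

fn tx_output_count_ok(outputs: usize) -> bool {
    outputs <= MAX_TX_OUTPUTS
}

fn main() {
    // a single tx at the per-tx cap still fits in a block with its coinbase
    assert!(tx_output_count_ok(MAX_TX_OUTPUTS));
    assert!(block_output_counts_ok(MAX_TX_OUTPUTS, 1));
}
```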

antiochp commented 6 years ago

I think fee compatibility rules for tx aggregation can be tackled separately. Not sure if we've already captured this as an issue or not?

ignopeverell commented 6 years ago

I agree with @antiochp's proposal. Regarding fees, ordering txs by fee for mining would be a good start...

antiochp commented 6 years ago

This is resolved now.

Edit: Except we discovered there is no weight limit check on transactions - see #1341. So we should fix that before closing this issue.

antiochp commented 6 years ago

Resolved.