Closed — jaekwon closed this issue 5 years ago
The simplest thing is maybe to cap the evidence at the size of the validator set, as a reasonable heuristic?
Also, if any of the evidence happens to be invalid, we'll error - so an attacker would have to be able to produce lots of valid evidence.
Longer term, it'd be cool if evidence came with light-client proofs.
The proposer/mempool side of this was done in #2184. Still need to enforce this in the consensus!
In #2184 evidence is capped at 1/10th of the max block size. I think it would be better to cap by a maximum number of evidence items. (I think evidence should be constant size, right?) It feels weird to always allocate so much space for it when the expectation is for there to be zero in almost every block. I think we only need to support 2 or 3 evidence reports by default (this being parameterized). Supporting more when we don't need them just increases the potential DoS vector. (Plus, since network delay is already possible, it's not a big deal if evidence gets into a block later.)
Should we just fix the cap here to something like 2, 3, or 5? Or should we make it a fraction of the validator set size? Maybe 5% of the validator set size?
Note that if there are many malicious validators, they could spam us with evidence against themselves and potentially prevent evidence against the other(s) from getting in. So it seems like the real max evidence should be a function of the validator set size and the evidence age, so we can ensure we can process all the evidence in time even if we're getting spammed.
I'm going to add a rule to enforce the current cap (1/10 block size) in the consensus, and then we can relax the amount of space we're using in a future PR.
Note that if there are many malicious validators, they could spam us with evidence against themselves and potentially prevent evidence against the other(s) from getting in. So it seems like the real max evidence should be a function of the validator set size and the evidence age, so we can ensure we can process all the evidence in time even if we're getting spammed.
This was basically the reason behind the CheckEvidence discussion: we'd have the app mark these other evidence items as bad, and the block wouldn't pass precommit (e.g. double signs at different blocks). I'm not sure what the final resolution was. With that in mind, a cap of size 2 seems good to me. (I don't think it needs to scale with the number of validators in the default implementation.)
Also, it's not a big deal if it takes a couple of blocks for evidence to get in, so we can wait for it to reach an honest validator. To mitigate the problem of a malicious validator double-signing at block n, then n+1, n+2, and feeding this evidence in sequence to an honest validator, we have the evidence pool run CheckEvidence on a random shuffling of the evidence pool and then remove any items that are bad.
Can we avoid making new ABCI message by doing something like https://github.com/tendermint/tendermint/issues/2561?
Totally agree we can avoid adding the new ABCI message, but that involves introducing the unbonding period concept to tendermint. It's not a hard change, so I'm not really opposed, but it's worth noting. I still think we should cap the size at a fixed constant, not a variable depending on the validator set size.
introducing the unbonding period concept to tendermint
It's already there! ConsensusParams.EvidenceParams.MaxAge. Need to change it to time: https://github.com/tendermint/tendermint/issues/2565
Closed in #2560. Follow-up to limit by absolute number instead of relative size in https://github.com/tendermint/tendermint/issues/2590
Each evidence item requires loading the validator set from some recent past height (see state/validation.go). This could be a DDoS vector for block proposals if we're not careful.