Closed nalepae closed 1 year ago
Maybe the finger slipped?
Which rule does this block proposal not comply with?
It's rule #2:
Refuse to sign any block with slot <= min(b.slot for b in data.signed_blocks if b.pubkey == proposer_pubkey), except if it is a repeat signing as determined by the signing_root.
There is a slot 30 block in the first interchange that's imported, so the block at slot 29 should not be signed:
Thanks for your answer!
There is a slot 30 block in the first interchange that's imported,
It's not what I understand actually.
For pubkey 0xa99a76ed7796f7be22d5b7e..., in the first interchange (steps[0].interchange.data), we import only one block, at slot 0. Then, in steps[0].blocks, we process 3 blocks: at slots 10, 20 & 30.
The slot 30 block does not come from the first interchange that's imported, but from blocks processed after importing the first interchange. (Ultimately, though, it does not change the issue IMO.)
==> After only importing steps[0].interchange.data, we already know that, for that pubkey, min(b.slot for b in data.signed_blocks) == 0.
So, when we try to process the block at slot 29 (steps[1].blocks[0]), we definitely have 29 > min(b.slot for b in data.signed_blocks) (since min(b.slot for b in data.signed_blocks) == 0) for this pubkey.
==> Importing the block at slot 29 does not break rule #2:
Refuse to sign any block with slot <= min(b.slot for b in data.signed_blocks if b.pubkey == proposer_pubkey), except if it is a repeat signing as determined by the signing_root.
Or is there something else I don't get?
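The reasoning above can be sketched in a few lines of Python (the `SignedBlock` type and the variable names are illustrative, not any client's actual data model):

```python
from dataclasses import dataclass

@dataclass
class SignedBlock:
    pubkey: str
    slot: int

PUBKEY = "0xa99a76ed7796f7be22d5b7e..."  # truncated as in the thread

# After importing steps[0].interchange.data (one block at slot 0) and
# then processing steps[0].blocks (slots 10, 20, 30):
signed_blocks = [SignedBlock(PUBKEY, s) for s in (0, 10, 20, 30)]

def passes_rule_2(slot: int, pubkey: str, blocks: list) -> bool:
    """Literal reading of rule #2: refuse iff slot <= min signed slot
    for this pubkey (ignoring the repeat-signing exception)."""
    min_slot = min(b.slot for b in blocks if b.pubkey == pubkey)
    return slot > min_slot

# The minimum signed slot is 0, so the slot 29 block passes the
# literal rule #2 check:
print(passes_rule_2(29, PUBKEY, signed_blocks))  # True
```

This is exactly the point of the question: taken literally, rule #2 only compares against the minimum signed slot, so it does not forbid signing slot 29 after slot 30.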
Then, in steps[0].blocks, we process 3 blocks: At slots 10, 20 & 30.
Ah yeah, sorry, I missed that.
We are slightly outside the domain of the EIP-3076 spec here, because it only specifies how to handle signed messages with respect to an interchange file. It is arguably valid to use either of the following strategies when signing extra blocks after importing a file:
Complete: keep every signed message, and allow signing any block that does not conflict with an already-signed one (so gaps between signed slots may later be filled).
Minimal: keep only the highest signed slot per validator, and refuse to sign any block at or below it.
The test case as currently written assumes Minimal while I guess your implementation assumes Complete. I tried to make the spec & tests compatible with both strategies, which is why the test README includes this paragraph:
Clients using a slashing protection mechanism that admits false positives may consider a rejection as success even if should_succeed is true.
I think this is probably the semantics we should rely on here: set should_succeed: true for the block at slot 29 so that the Complete impls can sign it, and then allow the Minimal impls to reject it. The test generator lives inside Lighthouse, which previously used a Complete strategy but switched to Minimal. I suspect that test case was added after we switched to Minimal, so the discrepancy was never spotted. The relevant test is here:
As far as I know most other implementations in-the-wild use minimal, because it is simpler than the complete strategy, more space efficient, and has virtually no downsides (it is rarely useful to be able to go back and sign an old message between ones you've already signed).
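The two strategies can be sketched side by side; on the scenario from this thread (slots 0, 10, 20, 30 already signed, then a block at slot 29) they disagree. Function names and signatures here are illustrative, not any client's actual API:

```python
def can_sign_minimal(slot: int, signed_slots: set) -> bool:
    # Minimal: only the highest signed slot matters; refuse anything
    # at or below it. Simple and space-efficient, but admits false
    # positives (it rejects slot 29 even though 29 was never signed).
    return slot > max(signed_slots)

def can_sign_complete(slot: int, signed_slots: set) -> bool:
    # Complete: keep every signed message; refuse only a block at a
    # slot that was actually signed before (ignoring the repeat
    # signing_root exception for brevity).
    return slot not in signed_slots

signed = {0, 10, 20, 30}
print(can_sign_minimal(29, signed))   # False: Minimal impls reject slot 29
print(can_sign_complete(29, signed))  # True: Complete impls can sign it
```

Neither decision is slashable by itself; the disagreement is only about whether gaps between already-signed slots may be filled.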
I'll have a go at re-working multiple_interchanges_single_validator_multiple_blocks_out_of_order and multiple_interchanges_single_validator_single_message_gap, and generate some new tests for you to run. If you can confirm they work for you then we can cut a release and ensure they pass on the other clients too. I suspect some re-working of the test runners in the other clients might be required, as it seems we were assuming properties of the Minimal strategy.
The test case as currently written assumes Minimal while I guess your implementation assumes Complete.
Oh, that is exactly the source of the confusion!
To give you a little bit more context: Prysm currently fails on the block at slot 29 of multiple_interchanges_single_validator_multiple_blocks_out_of_order, precisely because the test assumes a minimal DB is used, while Prysm uses a complete DB.
Clients using a slashing protection mechanism that admits false positives may consider a rejection as success even if should_succeed is true.
In this specific case, we are in the opposite state: should_succeed is actually set to false.
If you can confirm they work for you then we can cut a release and ensure they pass on the other clients too.
Sure, with pleasure! Or maybe another solution is to have:
Idea 1:
Or another idea would be to have 2 should_succeed fields:
Idea 2.1:
...
"signing_root": "0x0000000000000000000000000000000000000000000000000000000000000000",
"should_succeed_minimal": true,
"should_succeed_complete": false
The downside of Idea 2.1 is that it would break the current compatibility.
Idea 2.2:
...
"signing_root": "0x0000000000000000000000000000000000000000000000000000000000000000",
"should_succeed": true,
"should_succeed_complete": false
So:
should_succeed is implicitly for the minimal DB.
should_succeed_complete is for the complete DB.
==> Idea 2.2 does not break the current compatibility.
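A test runner could consume Idea 2.2 roughly like this (a sketch; the field values follow the snippet above, and `uses_complete_db` is an assumed runner-side flag):

```python
def expected_outcome(step: dict, uses_complete_db: bool) -> bool:
    # "should_succeed" keeps its current (minimal-DB) meaning, so
    # existing test files keep working unchanged;
    # "should_succeed_complete" is an optional override that only
    # complete-DB implementations consult.
    if uses_complete_db and "should_succeed_complete" in step:
        return step["should_succeed_complete"]
    return step["should_succeed"]

step = {
    "signing_root": "0x" + "00" * 32,
    "should_succeed": True,           # expectation for minimal DBs
    "should_succeed_complete": False, # override for complete DBs
}
print(expected_outcome(step, uses_complete_db=False))  # True
print(expected_outcome(step, uses_complete_db=True))   # False
```

Because the new field is optional and the old field keeps its meaning, runners that ignore should_succeed_complete behave exactly as today.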
Actually, the more I think about it, the more I believe tests should really rely on a clear-cut should_succeed state, without having to "consider a rejection as success even if should_succeed is true".
I guess Idea 2.2 is a good compromise between backward compatibility and the distinction between complete and minimal anti-slashing DBs.
Implementation of Idea 2.2: https://github.com/sigp/lighthouse/pull/4958
In test multiple_interchanges_single_validator_multiple_blocks_out_of_order, the label should_succeed of the last block is set to false, as we can see here.
Which rule does this block proposal not comply with?
I suspect
should be replaced by
If we have consensus, I can propose a PR with this file.