Closed mkalinin closed 5 months ago
How does optimistic sync work, if the ExecutionPayload no longer contains the deposit / withdrawal requests? Or would they simply be in a separate data structure?
Requests become a part of the BeaconBlockBody so the CL will still be able to process them during the optimistic sync phase.
Is it still possible to prove inclusion of a deposit / withdrawal request on the EL, by checking the generated receipt against the receipts_root?
For deposit requests it would be possible, but withdrawal requests aren’t emitting any events so they can’t be proved using the receipts_root as per the current design. Could you please provide the case where proving against the EL's receipts_root is important? For instance, in light clients they can be proved against the beacon block root.
Is the additional MPT in the EL block header still required with this?
It is not required; as per this proposal, requests aren’t included in the EL block.
Use case for proving against receipts_root: https://notes.ethereum.org/@vbuterin/parallel_post_state_roots
Using a single mechanism would simplify such consumers, so maybe emitting an event as a log should be considered, similar to how there is a log for deposits
It sounds to me that emitting an event for each request is orthogonal to this proposal, as currently there are no withdrawal request events, and the only way to prove them against the EL block structure is to use an MPT commitment to the list of requests in each EL block. I believe staking pools can build proofs linked to beacon block roots if this is needed. It is a good question whether such information can be useful to JSON-RPC consumers, as exits are actually happening today (CL triggered, though) and JSON-RPC does not provide access to this data.
JSON-RPC provides withdrawals (as part of eth_getBlockByNumber / eth_getBlockByHash), and has a withdrawals_root commitment to them in the block header (an MPT one, to be converted to SSZ with EIP-6465).
Agree that the proof itself does not have to be solved today. But JSON-RPC should expose the data in some form that is theoretically provable against on-chain commitments. A log serves that purpose.
You are right that purely CL triggered events are not available directly via JSON-RPC. As they are CL originated, though, they can be checked via EIP-4788. However, here, user triggered withdrawals are EL originated, and are only available in the subsequent block via EIP-4788 (as that one points to the previous slot). So it's not ideal to have certain EL data that requires pulling from an entirely different API.
Also interested in exploring other solutions besides logs. It just seems the most intuitive one, given how deposits work, and also how EIP-4788 works via a system contract so that EVM code can also access it.
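Since the deposit log is the model here, it may help to sketch what a consumer of a hypothetical withdrawal-request log would do. The 76-byte record layout (source address, validator pubkey, amount) follows the EIP-7002 system-contract output; the log itself, the field names, and the big-endian amount are assumptions of this sketch, not anything specified.

```python
from dataclasses import dataclass

# 20-byte address + 48-byte BLS pubkey + 8-byte gwei amount (EIP-7002 style)
RECORD_SIZE = 20 + 48 + 8

@dataclass
class WithdrawalRequest:
    source_address: bytes    # 20 bytes
    validator_pubkey: bytes  # 48 bytes
    amount: int              # gwei

def decode_withdrawal_request(data: bytes) -> WithdrawalRequest:
    # Decode one packed record, e.g. taken from the data of a
    # hypothetical withdrawal-request log.
    assert len(data) == RECORD_SIZE, "malformed record"
    return WithdrawalRequest(
        source_address=data[:20],
        validator_pubkey=data[20:68],
        # big-endian here is an assumption; verify against the EIP
        amount=int.from_bytes(data[68:76], "big"),
    )
```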
Requests become a part of the BeaconBlockBody so the CL will still be able to process them during the optimistic sync phase.
How do you ensure correctness of the deposit / withdrawal requests during optimistic sync?
A node will rely on the assumption that the chain it is being synced with is a valid chain, with the same assumptions as with the existing design.
As I understand, the deposit/withdrawal/consolidation requests will no longer be part of the EL block hash.
With the old design, the CL can optimistically validate the EL block hash. Once the EL syncs up, it can use the 'last valid hash' mechanism to say what portion of the chain is invalid.
With the new design, the BeaconBlockBody fields are no longer tied to the EL block hash, so it becomes possible to have bogus entries in the BeaconBlockBody that still have a VALID ExecutionPayload.
Consider an invalid withdrawal request r included in the block N. With the existing functionality a malicious party would need to build an ExecutionBlock and compute the block hash in order to include r and trick a syncing node into applying this request. With this proposal it would be a matter of building a BeaconBlockBody that incorporates r.
In both cases a syncing node can’t conclude that r is invalid and is not a product of the block N execution. And it will not be able to conclude that after it is fully synced if such an invalid chain keeps being extended with valid blocks.
The above case stems from the following property. If a chain has an invalid state modification and is synced beyond the point of that modification via state sync, it won’t be possible to test whether any invalid modification has been made to that chain in the past, and new valid blocks extending this chain will be applied without any issue.
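A toy illustration of that property (not client code): a node that state-syncs past block N downloads N's post-state directly, so a bogus request baked into that state is indistinguishable from a legitimate one, and every later valid block applies cleanly.

```python
def apply_block(state: dict, requests: list) -> dict:
    # Honest execution: derive the post-state from the parent state
    # plus the requests actually produced by the block.
    new_state = dict(state)
    for req in requests:
        new_state[req] = new_state.get(req, 0) + 1
    return new_state

# Block N applied "bogus-withdrawal:X" that no transaction ever
# triggered. The syncing node never executes N; it simply downloads
# N's post-state via state sync:
post_state_of_n = {"deposit:A": 1, "bogus-withdrawal:X": 1}

# New valid blocks extending this chain apply without any issue,
# and nothing ever flags the bogus entry:
head_state = apply_block(post_state_of_n, ["deposit:B"])
assert "bogus-withdrawal:X" in head_state
```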
Waiting to discuss this one with Prysm teammates. Could some notes on rationale be added to the description? Seems very doable but I feel a bit naive on the benefits/tradeoffs (I missed the original discussion, I believe).
@james-prysm added the motivation section in the PR description
I’ve added an Optimistic sync attack opened by this proposal to the PR description. Thanks @etan-status and @ethDreamer for raising your concerns around the fact that this proposal can break the optimistic sync.
The scenario described by the optimistic sync attack fell outside my consideration when I did the initial analysis 😬 @etan-status this is probably the case you had in mind behind https://github.com/ethereum/execution-apis/pull/551#issuecomment-2139743799
Yes. The CL has no chance to verify that the requests within the sidecar relate to the ExecutionPayload, so it requires looping in the EL on each individual block. In contrast, if the requests are part of the execution block hash, the CL can verify that the requests are linked to the ExecutionPayload, and because it's a blockchain any EL verdict on a later block also confirms all prior data to be valid, while latestValidHash tells you exactly where things went wrong.
A potential problem with this mitigation is that CL clients may not send each block via newPayload during the optimistic sync, thus the requests data may not always be available for the EL.
That may not be viable if the data is too far in the past, due to pruning. It's also not too performant -- right now during sync Nimbus only sends a newPayload / fcu every couple thousand blocks to avoid interrupting ongoing sync.
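The mitigation under discussion can be sketched as a small cache on the EL side (all names here are hypothetical, not engine-API methods): requests arriving with each newPayload are stashed and compared against the requests actually produced once the block is executed.

```python
from typing import Dict, List, Optional

class RequestsCache:
    """EL-side stash of CL-supplied requests, keyed by block hash."""

    def __init__(self) -> None:
        self._pending: Dict[bytes, List[bytes]] = {}

    def on_new_payload(self, block_hash: bytes, requests: List[bytes]) -> None:
        # Called for every newPayload received while optimistically syncing.
        self._pending[block_hash] = requests

    def on_block_executed(self, block_hash: bytes, actual: List[bytes]) -> bool:
        # Compare CL-supplied requests with the requests derived from
        # actually executing the block. Returns True when they match,
        # or when there is nothing to check.
        expected: Optional[List[bytes]] = self._pending.pop(block_hash, None)
        if expected is None:
            return True  # the CL never sent this block via newPayload
        return expected == actual
```

The `expected is None` branch is exactly the gap raised here: blocks the CL skipped during sync (or whose cached requests were pruned) can never be checked.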
This design is an alternative to the EIP-7685.
Another alternative could be to simply mirror the ExecutionPayload, namely making separate block header fields for each of the trees. That's how things were done in the past, and also how things would look if the EL block header becomes transitioned to SSZ. As I understand, EIP-7685 is primarily motivated by implementation choices that make extending the EL block header with a new field annoying, but we could also just go through that annoyance to have a simpler spec design that is consistent with CL and JSON-RPC, both of which have separate fields per request type.
I think we can still avoid this edge case if we just keep the requests_root in the execution block header since we already verify the block hash. This is not significantly different than how it works now.
Yes, indeed a requests_root in the payload header would help. Then we would have the following checks:
1) Requests on the CL side correspond to the requests_root in the payload; this can be validated with no block execution but requires the EL to do the check as the CL can’t decode RLP (some clients can, though)
2) EL checks that the requests obtained from the block execution match the requests_root in the block header, similarly to how it checks the receipts_root
While with the existing design we have:
1) EL checks that the requests obtained from the block execution match the requests_root in the block header and match the requests in the EL block body
2) engine_getPayloadBodies for payload bodies de-duplication
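Check (2) above can be sketched as follows; a plain sha256 over length-prefixed encodings stands in for the real MPT/SSZ commitment, which this sketch does not implement, and the function names are illustrative.

```python
import hashlib

def compute_requests_root(requests: list) -> bytes:
    # Stand-in commitment over an ordered list of encoded requests.
    h = hashlib.sha256()
    for r in requests:
        h.update(len(r).to_bytes(4, "little"))
        h.update(r)
    return h.digest()

def validate_requests(header_requests_root: bytes, executed_requests: list) -> bool:
    # EL recomputes the commitment from the requests produced by block
    # execution and compares it with the root carried in the header.
    return compute_requests_root(executed_requests) == header_requests_root
```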
After weighing the complexity of passing requests between the layers against having the EL run the same checks as today, from my perspective this proposal has marginal benefits and might not be worth the engineering complexity.
Consensus layer
CL obtains deposit, withdrawal and requests of any other type in the response to the engine_getPayload method call in a similar way as blobs are obtained. And then CL includes obtained requests into a BeaconBlockBody instead of the ExecutionPayload:
```python
class BeaconBlockBody(Container):
    randao_reveal: BLSSignature
    eth1_data: Eth1Data  # Eth1 data vote
    graffiti: Bytes32  # Arbitrary data
    # Operations
    proposer_slashings: List[ProposerSlashing, MAX_PROPOSER_SLASHINGS]
    attester_slashings: List[AttesterSlashing, MAX_ATTESTER_SLASHINGS_ELECTRA]  # [Modified in Electra:EIP7549]
    attestations: List[Attestation, MAX_ATTESTATIONS_ELECTRA]  # [Modified in Electra:EIP7549]
    deposits: List[Deposit, MAX_DEPOSITS]
    voluntary_exits: List[SignedVoluntaryExit, MAX_VOLUNTARY_EXITS]
    sync_aggregate: SyncAggregate
    # Execution
    execution_payload: ExecutionPayload  # [Modified in Electra:EIP6110:EIP7002]
    bls_to_execution_changes: List[SignedBLSToExecutionChange, MAX_BLS_TO_EXECUTION_CHANGES]
    blob_kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
    # Execution layer requests
    deposit_requests: List[DepositRequest, MAX_DEPOSIT_REQUESTS]  # [New in Electra:EIP6110]
    withdrawal_requests: List[WithdrawalRequest, MAX_WITHDRAWAL_REQUESTS]  # [New in Electra:EIP7002]
    consolidation_requests: List[ConsolidationRequest, MAX_CONSOLIDATION_REQUESTS]  # [New in Electra:EIP7251]
```
Does it make sense that the beacon block body has Deposits and DepositRequests? I'm finding that a bit odd / strange.
Edit: Discussed offline that it is necessary for the fork transition and eth1data voting period after the fork. The field is obsolete in electra, but not immediately so. Perhaps it can be removed in the following fork.
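As a sketch of the consumer side, the request lists in the container above could be folded into CL block processing roughly like this. The types and state keys are illustrative stand-ins, not the actual consensus-spec definitions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BodyRequests:
    # Stand-ins for the BeaconBlockBody request lists.
    deposit_requests: List[str] = field(default_factory=list)        # EIP-6110
    withdrawal_requests: List[str] = field(default_factory=list)     # EIP-7002
    consolidation_requests: List[str] = field(default_factory=list)  # EIP-7251

def process_execution_layer_requests(state: dict, body: BodyRequests) -> None:
    # Each list is applied in order, like other beacon block operations.
    for r in body.deposit_requests:
        state.setdefault("pending_deposits", []).append(r)
    for r in body.withdrawal_requests:
        state.setdefault("pending_withdrawals", []).append(r)
    for r in body.consolidation_requests:
        state.setdefault("pending_consolidations", []).append(r)
```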
Drafts an idea by @fjl and @lightclient to surface EL triggered requests as a sidecar to the ExecutionPayload.
Motivation
Simplify the EL part of the EL triggered requests design by avoiding requests inclusion into the ExecutionPayload structure.
Consensus layer
CL obtains deposit, withdrawal and requests of any other type in the response to the engine_getPayload method call in a similar way as blobs are obtained. And then CL includes obtained requests into a BeaconBlockBody instead of the ExecutionPayload.

When a beacon block is being processed, CL applies the requests as it is done today and also sends them via separate parameters to the engine_newPayload call to get them verified by the EL.

Execution layer
EL is responsible for surfacing each request type in a separate list in the response to engine_getPayload, and for validating that the expected requests passed from the CL via engine_newPayload match the actual requests obtained from the block execution.

This design is an alternative to EIP-7685.
Security and other considerations
Since EL triggered requests are a product of block execution, they can only be validated when a block is fully executed; thus this change doesn’t affect the security of either syncing or online nodes. For syncing nodes the validation will still happen only when the state is fully available, which happens near the head of the chain; for online nodes full validation will happen on each block.
The shift takes place on the request data commitment and storage part. Instead of duplicating requests between both layers, it is proposed to keep them only on the CL — the layer these requests are targeted for.
This proposal affects the mev-boost flow in that the requests should be sent to the CL alongside the ExecutionPayloadHeader. But this would also be required if requests were a part of the ExecutionPayload, as the CL would need to apply them to its state in order to build a blinded beacon block.

Optimistic sync attack
There is an optimistic sync attack vector opened by this proposal.
Consider a fully synced node that has experienced an outage with the following impact on the block's availability: A: (EL, CL) <- B: (CL) <- C: (new malicious block) <- D (new block)
The node starts up with the head B, which wasn't stored by the EL, and then attempts to import malicious block C (a child of B):
1) newPayload(C) is responded with SYNCING as the EL does not have the block B
2) CL optimistically imports the block C with the requests that were never triggered by the EL
3) EL catches up with the head of the chain via the state sync
4) CL imports the block D and newPayload(D) is responded with VALID
As a result of this attack the node would be tricked into following the chain containing the malicious block.
It is important to say that after such an outage the EL of an already synced node will use full block execution (instead of the state sync) to catch up with the head of the chain. Thus, in the case when requests are a part of the EL block (the existing design) step (4) from the above sequence of events would instead be as follows:
4) EL executes the block D and newPayload(D) is responded with {status: INVALID, latestValidHash: B.block_hash}

Which will make the CL reject the malicious chain and revert to block B.

If an adversary finds a way to induce an outage on the majority of nodes in the network, it can be leveraged to finalize a chain with invalid EL requests.
The mitigation of this attack is for the EL to keep requests sent via newPayload around and validate them after each executed block. A potential problem with this mitigation is that CL clients may not send each block via newPayload during the optimistic sync, thus the requests data may not always be available for the EL.

Todo