Description
During block proposal we compute the proposer index from scratch here:
https://github.com/sigp/lighthouse/blob/693886b94176faa4cb450f024696cb69cda2fe58/beacon_node/beacon_chain/src/beacon_chain.rs#L4308
Steps to resolve
We should use the beacon proposer cache, like this:
https://github.com/sigp/lighthouse/blob/693886b94176faa4cb450f024696cb69cda2fe58/beacon_node/beacon_chain/src/beacon_chain.rs#L3877-L3882
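The cache-first pattern can be sketched as follows. This is a simplified stand-in, not Lighthouse's actual API: the real `BeaconProposerCache` is keyed by an (epoch, shuffling decision root) pair, while the names and the plain epoch key here are illustrative only, chosen to keep the sketch self-contained.

```rust
use std::collections::HashMap;

/// Simplified stand-in for Lighthouse's beacon proposer cache.
/// A plain epoch key is used instead of the real
/// (epoch, shuffling decision root) key to keep the sketch self-contained.
struct ProposerCache {
    /// epoch -> proposer index for each slot of that epoch
    epochs: HashMap<u64, Vec<u64>>,
}

impl ProposerCache {
    fn get_slot_proposer(&self, epoch: u64, slot_in_epoch: usize) -> Option<u64> {
        self.epochs
            .get(&epoch)
            .and_then(|proposers| proposers.get(slot_in_epoch).copied())
    }
}

/// Cache-first lookup: consult the proposer cache, and only fall back to the
/// expensive from-scratch computation on a miss.
fn proposer_index(
    cache: &ProposerCache,
    epoch: u64,
    slot_in_epoch: usize,
    compute_from_state: impl FnOnce() -> u64,
) -> u64 {
    cache
        .get_slot_proposer(epoch, slot_in_epoch)
        .unwrap_or_else(compute_from_state)
}

fn main() {
    let mut epochs = HashMap::new();
    epochs.insert(5, vec![10, 11, 12]);
    let cache = ProposerCache { epochs };

    // Cache hit: the from-scratch closure is never invoked.
    assert_eq!(proposer_index(&cache, 5, 1, || unreachable!()), 11);
    // Cache miss: fall back to computing from the beacon state.
    assert_eq!(proposer_index(&cache, 6, 0, || 42), 42);
}
```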
On a cache miss we could either fall back to computing the index as we do now, or we could prime the cache using the available beacon state. The disadvantage of priming the cache is that it delays the `getPayload` request to the builder/execution layer. However, we might end up needing to prime the cache anyway. If we fix #4264 then gossip verification will try to prime the cache here:
https://github.com/sigp/lighthouse/blob/693886b94176faa4cb450f024696cb69cda2fe58/beacon_node/beacon_chain/src/block_verification.rs#L807-L814
Therefore I think we may as well try priming the cache on a miss; it's more future-proof.
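The prime-on-miss variant could look roughly like this. Again a self-contained sketch with illustrative names, not Lighthouse's actual types: `compute_epoch_proposers` is a hypothetical closure standing in for the from-state computation.

```rust
use std::collections::HashMap;

/// Simplified stand-in for the beacon proposer cache (illustrative names;
/// the real cache is keyed by an (epoch, shuffling decision root) pair).
struct ProposerCache {
    epochs: HashMap<u64, Vec<u64>>,
}

/// Prime-on-miss lookup: on a cache miss, compute the proposers for the whole
/// epoch from the available beacon state and insert them, so that later
/// lookups (e.g. during gossip verification) hit the cache.
fn proposer_index_priming(
    cache: &mut ProposerCache,
    epoch: u64,
    slot_in_epoch: usize,
    compute_epoch_proposers: impl FnOnce() -> Vec<u64>,
) -> u64 {
    if let Some(proposers) = cache.epochs.get(&epoch) {
        return proposers[slot_in_epoch];
    }
    // Miss: this is the step that delays getPayload, but the work is done
    // once per epoch rather than once per lookup.
    let proposers = compute_epoch_proposers();
    let index = proposers[slot_in_epoch];
    cache.epochs.insert(epoch, proposers);
    index
}

fn main() {
    let mut cache = ProposerCache { epochs: HashMap::new() };
    // First call misses and primes the cache for epoch 7.
    assert_eq!(proposer_index_priming(&mut cache, 7, 2, || vec![3, 4, 5]), 5);
    // Second call hits the cache; the closure is never invoked.
    assert_eq!(proposer_index_priming(&mut cache, 7, 0, || unreachable!()), 3);
}
```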