Open · michaelsproul opened 1 week ago
> This approach might be more compatible with validator index reuse

I don't think we should consider index reuse for now. If it ever comes, it will be a long while away, and Beam Chain might happen in the middle.
Two separate cleanups:

- Remove the `pubkey_bytes` field from the global cache. It is rarely used and quite unnecessary, because we can always load pubkey bytes from the head state, or load a `PublicKey` and quickly compress it.
- Remove the pubkey cache from `BeaconState`, and have a persistent cache that is cloned before block processing (keeps block processing lock-free).
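The second cleanup could look roughly like the sketch below: a cache whose bulk is shared behind an `Arc`, so the clone taken before block processing is cheap, with additions made during the block going into a small per-clone overlay. All names and types here are illustrative assumptions, not Lighthouse's actual API.

```rust
use std::collections::HashMap;
use std::sync::Arc;

/// Hypothetical pubkey cache that is cheap to clone: the bulk of the
/// mapping lives behind an `Arc` and is shared by all clones, while
/// additions made after cloning go into a small per-clone overlay.
/// (Illustrative only; `[u8; 48]` stands in for a compressed pubkey.)
#[derive(Clone, Default)]
struct PubkeyCache {
    /// Shared, immutable bulk: validator index -> compressed pubkey bytes.
    shared: Arc<HashMap<u64, [u8; 48]>>,
    /// Per-clone additions (e.g. deposits processed in this block).
    overlay: HashMap<u64, [u8; 48]>,
}

impl PubkeyCache {
    fn get(&self, index: u64) -> Option<&[u8; 48]> {
        self.overlay.get(&index).or_else(|| self.shared.get(&index))
    }

    fn insert(&mut self, index: u64, pubkey: [u8; 48]) {
        self.overlay.insert(index, pubkey);
    }
}

fn main() {
    let mut base = PubkeyCache::default();
    base.insert(0, [1u8; 48]);

    // Clone before block processing: the shared part is an Arc bump.
    let mut per_block = base.clone();
    per_block.insert(1, [2u8; 48]);

    assert!(per_block.get(0).is_some());
    assert!(per_block.get(1).is_some());
    // Mutations on the clone do not touch the original, so no lock
    // is needed around block processing.
    assert!(base.get(1).is_none());
    println!("ok");
}
```

Because the mutable overlay is private to each clone, block processing never takes a lock on the shared cache, which is the point of this cleanup.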
Description
Presently Lighthouse has two notions of a pubkey cache. There's the "global" cache attached to the `BeaconChain`, and "local" caches attached to each `BeaconState`.

Since the in-memory `tree-states` work, the local caches are stored using a persistent data structure, meaning that many beacon states can share most of the pubkey cache without duplication. The idea behind this issue is to extend this structural sharing to the global public key cache.

Implementation
One way to implement the cache would be for new beacon state caches to be initialised from the global cache, by cloning and making the necessary mutations. The persistent data structures (from `rpds`) then ensure that memory is shared.

Another approach would be to have a global cache that is aware of changes to pubkeys over time, and can respond with results for different `epoch` values. The beacon states could contain an `Arc<RwLock<..>>`
reference to this cache, and make queries. This approach might be more compatible with validator index reuse should it be implemented in future, as the global cache would never "forget" about old overwritten validators. This could be advantageous when reloading old states from disk, as the cache would already contain the relevant information and would not need to be rebuilt.
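The second approach could be sketched as follows, using only std types. Everything here is an assumption for illustration (the struct, its methods, and `[u8; 48]` as a stand-in for a compressed pubkey are not Lighthouse's actual code): each validator index maps to a history of (epoch, pubkey) entries, and a query resolves whichever pubkey was current at the requested epoch, so overwritten validators are never forgotten.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Hypothetical epoch-aware global pubkey cache. Each validator index
/// maps to a history of (epoch the pubkey became valid, pubkey), so
/// the cache never "forgets" overwritten validators and can answer
/// queries for old epochs, e.g. when reloading old states from disk.
#[derive(Default)]
struct EpochAwareCache {
    /// validator index -> entries sorted by epoch, ascending.
    history: HashMap<u64, Vec<(u64, [u8; 48])>>,
}

impl EpochAwareCache {
    /// Record that `pubkey` occupies `index` from `epoch` onwards.
    fn insert(&mut self, index: u64, epoch: u64, pubkey: [u8; 48]) {
        let entries = self.history.entry(index).or_default();
        entries.push((epoch, pubkey));
        entries.sort_by_key(|(e, _)| *e);
    }

    /// The pubkey at `index` as of `epoch`, if any: the latest entry
    /// whose starting epoch is <= the queried epoch.
    fn get_at(&self, index: u64, epoch: u64) -> Option<&[u8; 48]> {
        self.history
            .get(&index)?
            .iter()
            .rev()
            .find(|(e, _)| *e <= epoch)
            .map(|(_, pk)| pk)
    }
}

fn main() {
    // Beacon states would each hold an Arc<RwLock<..>> reference to
    // one shared instance and query it with their own epoch.
    let cache = Arc::new(RwLock::new(EpochAwareCache::default()));

    cache.write().unwrap().insert(0, 10, [1u8; 48]);
    // Hypothetical future index reuse: same index, new pubkey later.
    cache.write().unwrap().insert(0, 20, [2u8; 48]);

    let cache = cache.read().unwrap();
    assert_eq!(cache.get_at(0, 15), Some(&[1u8; 48]));
    assert_eq!(cache.get_at(0, 25), Some(&[2u8; 48]));
    assert_eq!(cache.get_at(0, 5), None);
    println!("ok");
}
```

The trade-off versus the clone-based approach is that reads go through the `RwLock`, but a state loaded from disk for an old epoch can query immediately without rebuilding any cache.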