sogolmalek opened 10 months ago
Implementation overview of the partial state caching mechanism:
Cache Data Structure:
Implement a hash map and doubly linked list combination. The doubly linked list tracks access order:
- Start with an empty doubly linked list that records the order of access for cached entries.
- Maintain pointers to the head (the most recently accessed entry) and the tail (the least recently accessed entry) of the list.
- When a new entry is cached, create a new node in the doubly linked list, link it to the current head, and update the head pointer to point to the new node.
- When an entry is accessed from the cache, move its corresponding node to the front of the list by updating the node's previous and next pointers.
- When the cache is full and an eviction is necessary, remove the node at the tail of the list and update the tail pointer to point to the evicted node's predecessor.
The advantage of using a doubly linked list for maintaining access order is that it allows for efficient removal and insertion of nodes, which is essential for LRU-based cache eviction policies. When an entry is accessed, it can be easily moved to the front of the list to signify its recent use. When an eviction is required, the tail of the list (the least recently used entry) can be removed with ease.
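The hash-map-plus-doubly-linked-list design above can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical and not taken from any Portal Network client:

```python
class Node:
    """Doubly linked list node holding one cached state fragment."""
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    """Hash map for O(1) look-up plus a doubly linked list for O(1)
    access-order updates; head = most recent, tail = least recent."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}      # key -> Node
        self.head = None   # most recently used
        self.tail = None   # least recently used

    def _unlink(self, node):
        # Detach a node, fixing head/tail pointers if needed.
        if node.prev: node.prev.next = node.next
        if node.next: node.next.prev = node.prev
        if self.head is node: self.head = node.next
        if self.tail is node: self.tail = node.prev
        node.prev = node.next = None

    def _push_front(self, node):
        # Link the node in front of the current head.
        node.next = self.head
        if self.head: self.head.prev = node
        self.head = node
        if self.tail is None: self.tail = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._unlink(node)       # move to front to mark as recently used
        self._push_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail      # evict the least recently used entry
            self._unlink(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._push_front(node)
```

Both `get` and `put` run in O(1): the hash map locates the node, and the list pointers are rewired without any traversal, which is exactly why this pairing suits an LRU eviction policy.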
Caching Policies: choose the LRU (Least Recently Used) policy, evicting the least recently accessed entry when the cache is full.
Key Generation: generate a unique key for each cached state fragment.
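One hypothetical way to derive such keys, assuming state fragments are addressed by an account address and a storage slot (the function name and scheme are illustrative, not a Portal Network specification):

```python
import hashlib

def cache_key(address: bytes, slot: int) -> bytes:
    """Hypothetical cache-key derivation: hash the account address
    together with the storage slot so each state fragment gets a
    stable, collision-resistant 32-byte key."""
    return hashlib.sha256(address + slot.to_bytes(32, "big")).digest()
```

Any stable, collision-resistant derivation works here; the important property is that the same fragment always maps to the same hash-map key.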
Consensus-Driven Validation: validate cached fragments against the latest state before serving them.
Frequent Access Detection:
Cache Size Management:
Data Segmentation:
Cache Expiry and Refresh:
Optimized Retrieval Process: perform a cache look-up first, before traversing the Verkle tree or querying the network.
In case of cache inconsistencies or failures, participants can fall back on the ZK proofs of the latest state to validate the correctness of cached fragments. This integration ensures network resilience and the ability to independently verify state accuracy.
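The look-up-first flow with the ZK-proof fallback described above might look like the following sketch. The `fetch_fragment` and `verify_against_zk_proof` callables are hypothetical placeholders, not real Portal Network APIs:

```python
def retrieve_fragment(cache, key, fetch_fragment, verify_against_zk_proof):
    """Cache look-up first; on a miss or a failed validation, fall back
    to fetching the fragment and checking it against a ZK proof of the
    latest state. Both callables are hypothetical placeholders."""
    value = cache.get(key)
    if value is not None and verify_against_zk_proof(key, value):
        return value                  # fast path: valid cached fragment
    value = fetch_fragment(key)       # slow path: traverse the network
    if not verify_against_zk_proof(key, value):
        raise ValueError(f"fragment {key!r} failed state-proof validation")
    cache.put(key, value)             # cache the freshly validated fragment
    return value
```

Because every fragment is re-checked against the latest state proof, a stale or corrupted cache entry degrades to a network fetch rather than an incorrect answer.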
With this prototype design, the partial state caching mechanism can efficiently manage frequently accessed or critical state fragments within the Portal Network.
Motivation: Verkle trees allow very lightweight clients to consume proofs from other nodes in order to transition from the last block to the new one; without such proofs, a client cannot establish the accuracy of the new state root on its own. Even if a stateless client discards all its stored information, it can still confirm the accuracy of new state roots. Such a client can still send transactions, but it cannot estimate gas, perform ETH calls, or read Ethereum's state, since it no longer maintains any state data. It is limited to actions that involve transitioning from one state root to the next, without any state-reading functionality. This is where the Portal Network comes into play. While it allows reading random state data, it does not fully mitigate the core issue: efficiently accessing state data remains crucial for various tasks, including gas estimation. Additionally, Verkle trees, despite their benefits, do not inherently solve problems like federated access to the state. The solution below aims to bridge this gap on top of the Portal Network.
Solution: To solve this, we introduce a stateless verifier LC (Lightweight Client) with a partial state caching mechanism that makes accessing specific segments of the state more efficient. It stores frequently accessed or important state fragments in a cache, enabling clients to retrieve them more quickly than by repeatedly traversing Verkle trees.