sogolmalek / EIP-x

State Verification and Efficient Data Retrieval for stateless Txs, through Portal Network #5

Open sogolmalek opened 10 months ago

sogolmalek commented 10 months ago

Motivation: Verkle trees allow very lightweight clients to consume proofs from other networks in order to transition from the last block to the new block. Even if a stateless client discards all of its stored information, it can still confirm the accuracy of new state roots. However, such a client can still send transactions but cannot calculate gas estimates, perform ETH calls, or read Ethereum's state, since it no longer maintains any state data. It is limited to actions that involve transitioning from one state root to the next, without any state-reading functionality.

This is where the Portal Network comes into play. While it allows the reading of random state data, it does not fully mitigate the core issue: efficiently accessing state data remains crucial for various tasks, including gas estimation. Additionally, Verkle trees, despite their benefits, do not inherently solve problems like federated access to the state. To bridge this gap, we combine the Portal Network with the caching mechanism described below.

Solution: To solve this, we introduce a stateless verifier LC (Lightweight Client) with a partial state caching mechanism to enhance the efficiency of accessing specific segments of the state. It achieves this by storing frequently accessed or important state fragments in a cache, enabling clients to retrieve them more quickly than repeatedly traversing Verkle trees.
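
The intended read path of the stateless verifier LC can be sketched as follows. This is a minimal Python sketch under stated assumptions: `verkle_lookup` is a hypothetical stand-in for the real Portal Network / Verkle-tree traversal, and the class and method names are illustrative, not from the repository.

```python
# Minimal sketch of the LC read path: consult the partial state cache
# first, and only fall back to a (slow) Verkle-tree traversal on a miss.
# `verkle_lookup` is a hypothetical stand-in for the real traversal.

class StatelessVerifierLC:
    def __init__(self, verkle_lookup):
        self._cache = {}                  # partial state cache: key -> fragment
        self._verkle_lookup = verkle_lookup

    def get_state_fragment(self, key):
        if key in self._cache:            # fast path: cached fragment
            return self._cache[key]
        fragment = self._verkle_lookup(key)   # slow path: traverse the tree
        self._cache[key] = fragment       # remember it for next time
        return fragment
```

On a repeated request the traversal is skipped entirely, which is the efficiency gain the caching mechanism targets.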

sogolmalek commented 10 months ago

Implementation overview of the partial state caching mechanism:

  1. Cache Data Structure:

    Implement a Hash Map and Linked List combination:

    • Use a hash map to store cached entries. Keys should be derived from state fragment characteristics (e.g., account address, block number).
    • Utilize a doubly linked list to maintain the order of accessed entries for eviction. In a doubly linked list, each node holds a reference to both the next and the previous node, so the list can be traversed efficiently in either direction. In the context of cache management, this ordering records how recently each entry was accessed, which is exactly what eviction policies such as Least Recently Used (LRU) require.

How the doubly linked list is used in the context of the cache:

The advantage of using a doubly linked list for maintaining access order is that it allows for efficient removal and insertion of nodes, which is essential for LRU-based cache eviction policies. When an entry is accessed, it can be easily moved to the front of the list to signify its recent use. When an eviction is required, the tail of the list (the least recently used entry) can be removed with ease.
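
The hash map plus doubly linked list combination described above can be sketched as follows: a minimal LRU cache in Python, where the class and field names are illustrative rather than taken from the repository.

```python
class _Node:
    """Doubly linked list node holding one cached entry."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None


class LRUCache:
    """Hash map for O(1) lookup + doubly linked list for access order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                      # key -> node
        self.head = _Node()                # sentinel: most-recently-used end
        self.tail = _Node()                # sentinel: least-recently-used end
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._unlink(node)                 # move to front: recently used
        self._push_front(node)
        return node.value

    def put(self, key, value):
        node = self.map.get(key)
        if node is not None:
            node.value = value
            self._unlink(node)
        else:
            if len(self.map) >= self.capacity:
                lru = self.tail.prev       # evict the least recently used
                self._unlink(lru)
                del self.map[lru.key]
            node = _Node(key, value)
            self.map[key] = node
        self._push_front(node)
```

Every `get` moves the entry to the front of the list, and eviction pops from the tail, so both operations stay O(1) regardless of cache size.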

  2. Caching Policies:

    Choose LRU (Least Recently Used) Policy:

    • When the cache is full, evict the least recently accessed entry.
    • Update the linked list to reflect the access order for eviction purposes.
  3. Key Generation:

    Generate Keys:

    • Create a hashing function that generates keys from relevant data attributes, ensuring uniqueness and efficient retrieval.
  4. Consensus-Driven Validation:

    Validation:

    • Integrate a cryptographic validation mechanism (digital signatures) to validate the authenticity of cached entries.
  5. Frequent Access Detection:

    • Implement access counters for each cached entry.
  6. Cache Size Management:

    • Set a maximum cache size and implement the eviction process based on the LRU policy.
  7. Data Segmentation:

    • Identify a few state fragments that will be considered critical for caching.
  8. Cache Expiry and Refresh:

    • Attach an expiry timestamp to each cached entry, together with a refresh mechanism.
  9. Optimized Retrieval Process:

    Cache Look-up First:

    • Before accessing the Verkle tree, check if the requested data exists in the cache.
    • If present, return the cached data immediately.
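
The Key Generation, Frequent Access Detection, Cache Expiry, and Optimized Retrieval steps above can be combined into one retrieval routine. This is a hedged Python sketch: the key derivation, the TTL value, and the `fetch_from_verkle` fallback are illustrative assumptions, not part of the repository.

```python
import hashlib
import time

CACHE_TTL = 60.0  # seconds; illustrative expiry window, not a spec value


def make_key(account_address: str, block_number: int) -> str:
    """Key Generation: derive a unique key from fragment attributes."""
    payload = f"{account_address}:{block_number}".encode()
    return hashlib.sha256(payload).hexdigest()


class CachedFragment:
    def __init__(self, data):
        self.data = data
        self.access_count = 0                      # Frequent Access Detection
        self.expires_at = time.time() + CACHE_TTL  # Cache Expiry timestamp


def get_fragment(cache: dict, account_address: str, block_number: int,
                 fetch_from_verkle):
    """Optimized Retrieval: check the cache before the Verkle tree."""
    key = make_key(account_address, block_number)
    entry = cache.get(key)
    if entry is not None and time.time() < entry.expires_at:
        entry.access_count += 1            # hit: bump the access counter
        return entry.data
    # Miss or expired: traverse the Verkle tree and refresh the entry.
    data = fetch_from_verkle(account_address, block_number)
    cache[key] = CachedFragment(data)
    return data
```

The per-entry access counter gives the cache the usage statistics needed to decide which fragments count as frequently accessed, while the expiry timestamp bounds how stale a cached fragment can get.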

In case of cache inconsistencies or failures, participants can fall back on the ZK proofs of the latest state to validate the correctness of cached fragments. This integration ensures network resilience and the ability to independently verify state accuracy.
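
The validation-plus-fallback flow can be sketched as follows, with a symmetric MAC standing in for the digital signatures and the ZK-proof path stubbed out as a callback. This is a hedged illustration of the control flow only, not the project's actual cryptography.

```python
import hashlib
import hmac

SECRET = b"illustrative-shared-key"  # stand-in for a real signing key


def tag(fragment: bytes) -> bytes:
    """Attach an authenticity tag when an entry is cached."""
    return hmac.new(SECRET, fragment, hashlib.sha256).digest()


def read_validated(cache, key, verify_against_zk_proof):
    """Return a cached fragment only if its tag checks out; otherwise
    fall back to re-validating state via the (stubbed) ZK-proof path."""
    entry = cache.get(key)
    if entry is not None:
        fragment, stored_tag = entry
        if hmac.compare_digest(stored_tag, tag(fragment)):
            return fragment                # authentic cached fragment
    return verify_against_zk_proof(key)    # fallback: ZK proof of latest state
```

A tampered or missing entry silently routes the request to the proof-based path, which is the resilience property the paragraph above describes.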

With this prototype design, the partial state caching mechanism can efficiently manage frequently accessed or critical state fragments within the Portal Network.