Assumption: there is a reasonable estimate of the network's churn rate (R)
Idea:
Upon joining, a node randomly generates k key-value pairs (this works for both raw key-value and content-addressed values) to be its "breadcrumbs".
The node stores those records in the DHT and records which node each record was stored with.
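The breadcrumb-generation step above could be sketched as follows. This is a minimal sketch, assuming a content-addressed scheme where the key is the hash of a random payload; the actual DHT put call and node-ID bookkeeping are left out since they depend on the particular DHT implementation.

```python
import os
import hashlib

def make_breadcrumbs(k):
    """Generate k random key-value pairs to use as breadcrumbs.

    Keys are derived by hashing the random value, so the same scheme
    works for content-addressed DHTs (key = hash(value)); for a raw
    key-value DHT the key could just as well be random too.
    """
    crumbs = {}
    for _ in range(k):
        value = os.urandom(32)                  # random, unguessable payload
        key = hashlib.sha256(value).digest()    # content address of the payload
        crumbs[key] = value
    return crumbs
```

After generating the pairs, the node would issue a store for each one (e.g. a hypothetical `dht.put(key, value)`) and remember the ID of the node that accepted each record.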
Periodically, the node searches the network for its "breadcrumbs", recording the rate at which records are lost and the rate at which the "answering" node did not match the storing node.
The node then stores new breadcrumbs, removing local knowledge of older breadcrumbs as they are lost.
Based on the expected rate of churn, the age of each breadcrumb, and the rate at which they are lost or moved due to backups, estimate the likelihood of these observations occurring due to "natural" events.
If the likelihood of the observed losses and moves being due to natural events falls below a given threshold, report to the user that the node is likely eclipsed; what to do beyond reporting is still an open question.
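One way to make the likelihood test concrete is sketched below. The churn model is an assumption for this sketch: each storing node is taken to depart as a Poisson process with rate R, so a breadcrumb of age t is still held by its original node with probability exp(-R*t). The joint log-likelihood of the observed intact/disturbed outcomes is then compared against a threshold.

```python
import math

def eclipse_suspicion(ages, outcomes, churn_rate, threshold=1e-3):
    """Judge whether breadcrumb outcomes are plausible under churn alone.

    Assumed model: storing nodes depart as a Poisson process with rate
    churn_rate, so a breadcrumb of age t is still served by its original
    node with probability exp(-churn_rate * t).
    ages[i] is breadcrumb i's age; outcomes[i] is True if it was still
    served by its original node, False if lost or answered elsewhere.
    Returns (log_likelihood, suspected): suspected is True when the
    joint likelihood of the observations falls below threshold.
    """
    log_l = 0.0
    for age, intact in zip(ages, outcomes):
        p_intact = math.exp(-churn_rate * age)
        p = p_intact if intact else (1.0 - p_intact)
        log_l += math.log(max(p, 1e-12))    # guard against log(0)
    suspected = log_l < math.log(threshold)
    return log_l, suspected
```

Intuitively, if many young breadcrumbs are already lost or answered by unexpected nodes while the churn rate says they should still be in place, the joint likelihood collapses and the node flags a probable eclipse.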