Background
The Data Availability model we use requires data discovery. We rely on IPFS's Kademlia DHT, which allows any network participant to find hosts for a certain piece of data by its hash.
Usage Description
To describe how we use it, let's introduce a simple pseudo-code interface for it:
interface DHT {
// Find the peers nearest to the hash and ask them to keep a record of us hosting the data.
// By default, records are stored for 24h.
Provide(hash)
// Find peers hosting the data by its hash.
FindProviders(hash) []peer
// Periodically execute Provide for a given hash to keep the record around.
Reprovide(hash)
}
When a block producer creates a block, it saves it and calls Provide for every Data Availability root of the block, making the data discoverable and, afterward, available. Any other node that wants to get the block's data or validate its availability can then call FindProviders, discover the block producer, and finally access the block data through Bitswap. Both the block producer and the block requester also call Reprovide to keep the records alive. Overall, with the described flow, we aim for maximum confidence that the data of any particular block is always discoverable from the peers storing it.
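To make the flow concrete, here is a minimal Go-style sketch built on top of the pseudo-code DHT interface above. The Block, BlockStore, Bitswap, Hash, and Data types, and all method names on them, are illustrative assumptions rather than actual APIs:

// Producer side: persist the block and announce every DA root on the DHT.
func produceAndProvide(dht DHT, store BlockStore, block Block) {
	store.Put(block)
	for _, root := range block.DARoots() {
		dht.Provide(root)   // make the root discoverable by its hash
		dht.Reprovide(root) // keep the record refreshed past its 24h lifetime
	}
}

// Requester side: discover who stores the data and fetch it via Bitswap.
func fetchBlockData(dht DHT, bitswap Bitswap, root Hash) (Data, error) {
	providers := dht.FindProviders(root) // e.g., the block producer
	data, err := bitswap.Get(root, providers)
	if err != nil {
		return nil, err
	}
	dht.Reprovide(root) // requesters keep the record alive as well
	return data, nil
}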
What's Left
The current state of the implementation does not fully conform to the flow above; the following items remain to be addressed:
Pain Points
Node churn
Records of someone hosting data are stored on peers selected not by their reliability, but simply by the XOR distance metric. Unfortunately, this means the records often end up on light clients, which are not meant to be full-featured daemons and therefore store them unreliably. As a result, some data may become undiscoverable for some period of time.
Solutions
Reproviding helps here (see the sketch after these points). However, we never know when a light client leaves, so data may stay undiscoverable for many hours until the next reprovide happens and stores the records on other nodes.
A DHT client with a full routing table can keep records besides the ones chosen by the XOR metric, and the nodes running it are expected to be reliable. Thus, they can fill the gap of undiscoverable hours.
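To illustrate reproviding, here is a minimal Go-style sketch of how Reprovide could be built on top of Provide from the pseudo-code interface above. The Hash type, the interval parameter, and the function name are illustrative assumptions; the interval just has to sit comfortably below the 24h record lifetime:

// Re-announce the hash on a fixed interval so the provider record never
// expires, even when the peers that were holding it churn out of the network.
func reprovideLoop(ctx context.Context, dht DHT, hash Hash, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return // the caller stopped reproviding, e.g., the data is no longer needed
		case <-ticker.C:
			dht.Provide(hash) // refresh the record on the current nearest peers
		}
	}
}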
Providing Time
We need to ensure that providing takes less time than the interval between two subsequent block proposals by the same node. Otherwise, DHT providing would not keep up with block production, creating an ever-growing providing queue. Unfortunately, for the standard DHT client, providing can take up to 3 minutes on a large-scale network.
From this also follows a rule: the bigger the committee, the more time a node has to finish providing between its own proposals. Naturally, the larger the network, the larger the committee, and the longer the providing time, so these can grow together organically without causing issues. But if slow providing still turns out to be a problem, a full-routing-table DHT client for block producers would be a solution, as it significantly reduces providing time.
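For intuition, here is a back-of-the-envelope check of this rule. The block time and committee size below are illustrative assumptions, not protocol parameters:

package main

import (
	"fmt"
	"time"
)

func main() {
	blockTime := 15 * time.Second  // assumed average block interval
	committeeSize := 100           // assumed number of producers taking turns
	provideTime := 3 * time.Minute // worst-case provide on a large network

	// A given node proposes roughly once per full committee rotation,
	// so that rotation is its budget for finishing the previous provide.
	budget := time.Duration(committeeSize) * blockTime
	fmt.Printf("budget=%v, worst-case provide=%v, keeps up=%v\n",
		budget, provideTime, provideTime < budget)
}

With these numbers, a node has roughly 25 minutes between its own proposals, comfortably above the 3-minute worst case; the margin shrinks only if the committee is small relative to the network size.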
Other Possible Improvements