Currently there is no way to know when it is safe to clear a set of blocks if those blocks originate from the very hypercore you want to clear them from (i.e. you are the writer, so your copy may be the only one).
If I understood @mafintosh correctly, such a mechanism could be added by giving the protocol a way to request explicit ACK messages confirming that a given block has been stored. A workaround for the current limitation might be to reconnect to the peer(s) to get a fresh bitfield state.
Adding such a mechanism would make it possible to create a hypercore on a storage-limited device (my use case is a browser extension) while still maintaining good durability guarantees, assuming one or more only partially available archivers are replicating those feeds.
From an ease-of-use standpoint, I think the most appropriate way to expose this is: when some combination of `live` and `sparse` is specified, you can optionally add a numerical `replicas` option which would automatically clear a block after that many peers have ACK'd it (similar to Tahoe-LAFS's "servers of happiness"), plus a low-level event handler similar to the current per-peer ones (e.g. `download`). However, without adding something like long polling to the protocol, I don't see how a value higher than 1 could ever be guaranteed (e.g. if only one of the replicas gets the actual block data from the origin and then shares it with the rest).
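To illustrate the intended semantics only (the `replicas` option and per-peer ACKs are hypothetical, not part of hypercore today), the threshold bookkeeping could look something like this:

```javascript
// Hypothetical sketch: count distinct peer ACKs per block and report
// when a block first reaches the desired replica count, so the caller
// can clear it locally. None of this is actual hypercore API.
class ReplicaTracker {
  constructor (replicas) {
    this.replicas = replicas  // e.g. 1 is all the protocol can guarantee today
    this.acks = new Map()     // block index -> Set of peer ids that ACK'd it
  }

  // Record that `peerId` has ACK'd block `index`.
  // Returns true exactly once, when the block first reaches the threshold.
  ack (peerId, index) {
    let peers = this.acks.get(index)
    if (!peers) this.acks.set(index, peers = new Set())
    if (peers.has(peerId) || peers.size >= this.replicas) return false
    peers.add(peerId)
    return peers.size === this.replicas
  }
}
```

The event handler mentioned above would then just wire incoming ACKs into `tracker.ack(...)` and clear the block from local storage whenever it returns `true`.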
Based on a cursory understanding of the protocol, it seems to me like HAVE/UNHAVE could provide the actual ACK mechanism (incidentally, why does UNHAVE not carry a bitfield?), but what's needed is to define a new mode with stronger guarantees about when those messages are sent, and a mechanism to upgrade a connection to those semantics.
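For reference, here is a sketch of how a receiver might fold those messages into per-peer state, assuming the wire shapes from the protocol spec: HAVE carries `start` plus either a `length` or a packed `bitfield`, while UNHAVE carries only `start`/`length` (hence the question above). The MSB-first bit numbering within bitfield bytes is an assumption on my part.

```javascript
// Sketch: apply HAVE/UNHAVE messages to a Set of block indices that a
// remote peer claims to have. Message field names follow the protocol
// spec; MSB-first bit order within bitfield bytes is assumed.
function applyHave (remote, msg) {
  if (msg.bitfield) {
    // HAVE with a bitfield: bit i set => peer has block (start + i)
    for (let i = 0; i < msg.bitfield.length * 8; i++) {
      if (msg.bitfield[i >> 3] & (128 >> (i & 7))) remote.add(msg.start + i)
    }
  } else {
    // HAVE with a plain range (length defaults to 1)
    const len = msg.length || 1
    for (let i = 0; i < len; i++) remote.add(msg.start + i)
  }
}

function applyUnhave (remote, msg) {
  // UNHAVE is range-only: there is no bitfield variant in the protocol
  const len = msg.length || 1
  for (let i = 0; i < len; i++) remote.delete(msg.start + i)
}
```

A HAVE sent right after a block is persisted would be the ACK; combined with a per-block replica counter, it tells the origin when clearing is safe.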