Open pfrazee opened 5 years ago
I have a lot of experience with designing such architectures, because preventing forks in a Byzantine-fault-tolerant way is essentially the same kind of BFT consensus used to eliminate the double-spend problem for crypto tokens.
If there is more than one valid history, one way to reach consensus is to vote on which valid history to take.
Another approach is to check whether there were any conflicting writes in the current block and, if so, reject BOTH transactions. Then you have to agree on when a block has truly ended. See for example this debate:
https://forum.intercoin.org/t/intercoin-technology-consensus/80/8
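The reject-both rule described above can be sketched in a few lines. This is an illustrative model, not any real chain's implementation; the `Tx` shape and `filterConflicts` name are assumptions.

```typescript
// Sketch: reject-both conflict resolution within a finalized block.
// Hypothetical types -- a "write" here is just a transaction keyed by the
// state entry it touches.

interface Tx {
  id: string;
  key: string; // the state entry this transaction writes
}

function filterConflicts(block: Tx[]): { accepted: Tx[]; rejected: Tx[] } {
  // Group the block's transactions by the key they write.
  const writers = new Map<string, Tx[]>();
  for (const tx of block) {
    const list = writers.get(tx.key) ?? [];
    list.push(tx);
    writers.set(tx.key, list);
  }
  const accepted: Tx[] = [];
  const rejected: Tx[] = [];
  for (const txs of writers.values()) {
    // A key written more than once in the same block is a conflict:
    // reject BOTH (all) writers rather than trying to pick a winner.
    (txs.length === 1 ? accepted : rejected).push(...txs);
  }
  return { accepted, rejected };
}
```

The appeal is that no vote is needed to order conflicting writes; the cost is that honest transactions caught in a conflict must retry in a later block.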
The problem with consensus in general is that sybil attacks can manipulate the vote: a lot of computers can join the swarm of a file they are interested in corrupting, or prevent further progress by e.g. going silent. The remaining nodes then never know whether a majority of nodes is faking being offline, or whether there is a true netsplit and a larger branch might have a more valid resolution.
(In Bitcoin etc. this is resolved by finding a dictator via costly Proof of Work, but that leads to an arms race of electricity waste, something Dat would hardly want to be associated with. And even then, the winning branch may STILL be overwritten at any time if another branch with even more proof of work appears later.)
The usual way to prevent sybil attacks is to federate from the original nodes and issue new accounts slowly, so that at no point can the network be overwhelmed by a large influx of new nodes. This is the approach MaidSAFE takes, where new nodes furthermore have to contribute a lot of useful activity (such as Proof of Storage of data over a long period of time) before they are allowed to vote about data and other automatic governance decisions. All of this, of course, should be sharded by key, but a node's reputation is itself a piece of data that is stored only by older (and more privileged) nodes.
https://medium.com/safenetwork/proof-of-resource-on-the-safe-network-192d5c951ebc
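The gated-admission idea can be sketched as follows. The field names and the 90-day threshold are assumptions for illustration, not the real SAFE Network API:

```typescript
// Sketch of proof-of-resource gated voting: a node earns a vote only after
// sustained, verified proof of storage, so a sudden flood of fresh sybil
// nodes gains no say in governance decisions.

interface PeerNode {
  id: string;
  storageProofDays: number; // days of continuous, verified proof-of-storage
}

const MIN_PROOF_DAYS = 90; // assumed admission threshold

// Only long-standing, resource-proven nodes participate in votes.
function votingSet(nodes: PeerNode[]): PeerNode[] {
  return nodes.filter((n) => n.storageProofDays >= MIN_PROOF_DAYS);
}
```

Because newcomers start at zero proof, an attacker who spins up thousands of nodes today still has no voting power for months, which is exactly the slow-issuance property described above.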
Also read this:
https://hackernoon.com/decentralized-objective-consensus-without-proof-of-work-a983a0489f0a
And the follow-up (maybe less relevant):
https://hackernoon.com/proof-of-membership-for-blockchains-1534a3f9faba
However, there is a third option, which is a simpler version of the one above: a central authority decides who gets to watch which data. This is the approach taken by Ripple. They publish and sign their Unique Node List (UNL), the set of all nodes participating in the network, eliminating Sybil attacks in one fell swoop. The downside is that many in the crypto community hate Ripple and call them “centralized”, but it’s hard to argue with the results: it freed them up to work on other things, their network is very robust, and their coin is traded on tons of exchanges.
In Ripple’s case, the UNL contains the nodes that watch ALL tokens. In Dat’s case, each Dat network would need to have a signed policy for choosing nodes that participate in a consensus, whether it is about all Dats or just a subset (such as one Dat).
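A signed node-list policy of this kind could look roughly like the sketch below, using Node's built-in Ed25519 support. The `publishUNL` / `isEligibleVoter` names and the single operator key are illustrative assumptions, not Ripple's actual protocol:

```typescript
// Sketch of a UNL-style signed node list: the list operator signs the
// membership set; peers count a vote only if the voter appears on a list
// whose signature verifies against the operator's well-known public key.

import { generateKeyPairSync, sign, verify } from "crypto";

// Operator side: a well-known Ed25519 keypair (generated here for the demo).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function publishUNL(nodes: string[]): { nodes: string[]; signature: Buffer } {
  const payload = Buffer.from(JSON.stringify(nodes));
  return { nodes, signature: sign(null, payload, privateKey) };
}

// Peer side: sybil nodes fail this check no matter how many join the swarm.
function isEligibleVoter(
  unl: { nodes: string[]; signature: Buffer },
  voterId: string,
): boolean {
  const payload = Buffer.from(JSON.stringify(unl.nodes));
  return (
    verify(null, payload, publicKey, unl.signature) &&
    unl.nodes.includes(voterId)
  );
}
```

In the Dat framing above, the signed policy per Dat network would play the role of this list, scoped to all Dats or a single one.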
At the end of the day, this is not an easy problem: if 51% or more of the nodes are not responding, the system cannot make any further progress without risking all that progress being rolled back in the future. This means, for example, that I can’t accept payment for an expensive item, because at some point in the future the token you paid me may be yanked from me and returned to you, or to someone else.
The way this is usually solved is to make it more expensive to take over 51% of the network than there is value represented by the network.
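The 51% liveness bound above reduces to a one-line majority check, sketched here with illustrative names:

```typescript
// Sketch: a quorum-based system may only commit when a strict majority of
// the KNOWN voting set responds. Below that bound, the live minority cannot
// distinguish a netsplit from a silent-majority attack, so it must stall
// rather than risk its branch being rolled back later.

function canMakeProgress(totalNodes: number, responding: number): boolean {
  // Strict majority: more than half of all known nodes must respond.
  return responding * 2 > totalNodes;
}
```

With 10 known nodes, 6 responders can commit, but 5 cannot: the 5 silent nodes might be a partitioned majority that later presents a longer, conflicting history.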
Hypercore is a mechanism for publishing data logs which each represent a single user. It is not designed to support global consensus on state, so neither decentralized consensus nor BFT is part of our requirements here.
More broadly, Dat is not designed to support a cryptocurrency. It’s a data sharing and publishing network. It sacrifices strict global consistency and atomic global transactions in order to be more efficient.
As a result, we can resolve hypercore splits using only information provided by the hypercore author. We simply need resolution protocols which fit the data structures being encoded on the hypercore, which this proposal enables.
@mafintosh and I discussed this in a WG meeting. We came to two conclusions:
I had a new idea for solving Hypercore forks that I wanted to propose. From the summary:
It's a fairly simple premise:
In practice, it would look something like:
Basically, I think we can trade immutable history and the risk of total dat corruption for mutable history (in error conditions) and the risk of some lost writes.
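The trade-off described above can be sketched as follows. The `Branch` shape and the author-assigned fork sequence are illustrative assumptions, not the actual Hypercore API:

```typescript
// Sketch: author-driven fork resolution. When a fork is detected, peers
// adopt whichever branch the author signed with the higher fork sequence;
// entries present only on the abandoned branch become "lost writes" instead
// of corrupting the whole dat.

interface Branch {
  seq: number;       // author-assigned fork sequence (higher = newer)
  entries: string[]; // log entries on this branch
}

function resolveFork(a: Branch, b: Branch): { winner: Branch; lost: string[] } {
  // The author's newer fork wins -- history becomes mutable at this point.
  const [winner, loser] = a.seq >= b.seq ? [a, b] : [b, a];
  const kept = new Set(winner.entries);
  // Anything only on the losing branch is dropped, not merged.
  return { winner, lost: loser.entries.filter((e) => !kept.has(e)) };
}
```

Because only information signed by the hypercore author is consulted, no swarm-wide vote (and hence no sybil resistance) is needed; the cost is exactly the lost writes the summary mentions.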