This repo contains the protocol specification, reference implementations, and tests for the negentropy set-reconciliation protocol. See our article for a detailed description. For the low-level wire protocol, see the Negentropy Protocol V1 specification.
Set-reconciliation supports the replication or syncing of data-sets, either because they were created independently, or because they have drifted out of sync due to downtime, network partitions, misconfigurations, etc. In the latter case, detecting and fixing these inconsistencies is sometimes called anti-entropy repair.
Suppose two participants on a network each have a set of records that they have collected independently. Set-reconciliation efficiently determines which records one side has that the other side doesn't, and vice versa. After the records that are missing have been determined, this information can be used to transfer the missing data items. The actual transfer is external to the negentropy protocol.
Negentropy is based on Aljoscha Meyer's work on "Range-Based Set Reconciliation" (overview / paper / master's thesis).
This page is a technical description of the negentropy wire protocol and the various implementations. Read our article for a comprehensive introduction to range-based set reconciliation, and the Negentropy Protocol V1 specification for the low-level wire protocol.
In order to use negentropy, you need to define some mappings from your data records:
* `record -> ID`
* `record -> timestamp` (if your records have no meaningful timestamp, `0` can be used as the timestamp for every record)

Negentropy does not support the concept of updating or changing a record while preserving its ID. This should instead be modelled as deleting the old record and inserting a new one.
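For concreteness, a minimal record type providing these two mappings might look like the following sketch (the names are illustrative and not part of any reference API):

```cpp
#include <cstdint>
#include <string>

// Illustrative only: a record together with the two required mappings.
struct Record {
    std::string payload;  // application data (opaque to negentropy)
    std::string id;       // record -> ID, e.g. a hash of the payload
    uint64_t timestamp;   // record -> timestamp; 0 if no meaningful timestamp exists
};
```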
The two parties engaged in the protocol are called the client and the server. The client is sometimes also called the initiator, because it creates and sends the first message in the protocol.
Each party should begin by sorting their records in ascending order by timestamp. If the timestamps are equivalent, records should be sorted lexically by their IDs. This sorted array and contiguous slices of it are called ranges.
For the purpose of this specification, we will assume that records are always stored in arrays. However, implementations may provide more advanced storage data-structures such as trees.
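As a sketch of the required ordering (reusing the illustrative `Record` type above), the comparator could look like this:

```cpp
#include <algorithm>
#include <vector>

// Sort ascending by timestamp, breaking ties lexically by ID.
void sortRecords(std::vector<Record> &records) {
    std::sort(records.begin(), records.end(), [](const Record &a, const Record &b) {
        if (a.timestamp != b.timestamp) return a.timestamp < b.timestamp;
        return a.id < b.id;  // std::string compares lexicographically
    });
}
```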
Because each side potentially has a different set of records, ranges cannot be referred to by their indices in one side's sorted array. Instead, they are specified by lower and upper bounds. A bound is a timestamp and a variable-length ID prefix. In order to reduce the sizes of reconciliation messages, ID prefixes are as short as possible while still being able to separate records from their predecessors in the sorted array. If two adjacent records have different timestamps, then the prefix for a bound between them is empty.
Lower bounds are inclusive and upper bounds are exclusive, as is typical in computer science. This means that given two adjacent ranges, the upper bound of the first is equal to the lower bound of the second. In order for a range to have full coverage over the universe of possible timestamps/IDs, the lower bound would have a 0 timestamp and all-0s ID, and the upper-bound would be the specially reserved "infinity" timestamp (max u64), and the ID doesn't matter.
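To illustrate how short such prefixes can be, here is a sketch (not taken from any reference implementation) that computes a bound separating a record from its predecessor in the sorted array:

```cpp
#include <cstdint>
#include <string>

struct Bound {
    uint64_t timestamp;
    std::string idPrefix;  // variable-length, possibly empty
};

// Sketch: compute a bound that separates `curr` from the preceding record `prev`.
Bound boundBetween(const Record &prev, const Record &curr) {
    if (prev.timestamp != curr.timestamp) {
        // Different timestamps: an empty ID prefix is sufficient.
        return Bound{curr.timestamp, ""};
    }
    // Same timestamp: take the shortest prefix of curr.id that distinguishes it from prev.id.
    size_t shared = 0;
    while (shared < prev.id.size() && shared < curr.id.size() && prev.id[shared] == curr.id[shared]) shared++;
    return Bound{curr.timestamp, curr.id.substr(0, shared + 1)};
}
```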
After both sides have set up their sorted arrays, the client creates an initial message and sends it to the server. The server then replies with another message, and the two parties continue exchanging messages until the protocol terminates (see below). After the protocol terminates, the client will have determined which IDs it has (and the server needs) and which it needs (and the server has). If desired, it can then respectively upload and/or download the missing records.
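The overall exchange can be driven by a small loop on the client. The following sketch is loosely modeled on the reference C++ API (exact class, method, and parameter names may differ), and the transport helper is hypothetical:

```cpp
#include <optional>
#include <string>
#include <vector>

// Hypothetical transport helper: sends a message to the server and returns its reply.
std::string sendToServerAndReceive(const std::string &msg);

// Sketch of the client-side message loop; `Negentropy` stands in for whichever
// implementation or binding is being used.
template <typename Negentropy>
void syncAsClient(Negentropy &ne, std::vector<std::string> &have, std::vector<std::string> &need) {
    std::string msg = ne.initiate();  // initial message covering the full universe

    while (true) {
        std::string reply = sendToServerAndReceive(msg);
        std::optional<std::string> next = ne.reconcile(reply, have, need);
        if (!next) break;  // no further message needed: the protocol has terminated
        msg = *next;
    }
    // `have` now holds IDs this side has (and the server needs);
    // `need` holds IDs the server has (and this side needs).
}
```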
Each message consists of a protocol version byte followed by an ordered sequence of ranges. Each range contains an upper bound, a mode, and a payload. The range's implied lower bound is the same as the previous range's upper bound (or 0, if it is the first range). The mode indicates what type of processing is needed for this range, and therefore how the payload should be parsed.
The modes supported are:
* `Skip`: No further processing is needed for this range. Payload is empty.
* `Fingerprint`: Payload contains a digest of all the IDs within this range.
* `IdList`: Payload contains a complete list of IDs for this range.

If a message does not end in a range with an "infinity" upper bound, an implicit range with upper bound of "infinity" and mode `Skip` is appended. This means that an empty message indicates that all ranges have been processed and the sender believes the protocol can now terminate.
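In terms of logical structure (the actual wire encoding is defined in the Negentropy Protocol V1 specification), a decoded message can be pictured roughly like this sketch:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Sketch of the decoded, logical structure only; not the wire format.
enum class Mode { Skip, Fingerprint, IdList };

struct Range {
    uint64_t upperTimestamp = 0;   // upper bound (exclusive); max u64 means "infinity"
    std::string upperIdPrefix;     // variable-length ID prefix of the upper bound
    Mode mode = Mode::Skip;        // Skip carries no payload
    std::vector<std::string> ids;  // payload for IdList
    std::string fingerprint;       // payload for Fingerprint
};

struct Message {
    uint8_t protocolVersion = 0;
    std::vector<Range> ranges;     // each range's implied lower bound is the previous range's upper bound
};
```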
Upon receiving a message, the recipient should loop over the message's ranges in order, while concurrently constructing a new message. `Skip` ranges are answered with `Skip` ranges, and adjacent `Skip` ranges should be coalesced into a single `Skip` range.
`IdList` ranges represent a complete list of IDs held by the sender. Because the receiver obviously knows the items it has, this information is enough to fully reconcile the range. Therefore, when the client receives an `IdList` range, it should reply with a `Skip` range. However, since the goal of the protocol is to ensure the client has this information, when a server receives an `IdList` range it should reply with its own ranges (typically `IdList` and/or `Skip` ranges).
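A rough sketch of how a server-side implementation might answer `Skip` and `IdList` ranges, reusing the illustrative structures above (`Fingerprint` handling is covered next):

```cpp
#include <vector>

// Sketch: answer a single incoming range on the server side.
// A client would instead answer an IdList range with a Skip range.
Range answerRange(const Range &incoming, const std::vector<Record> &myRecordsInRange) {
    Range out;
    out.upperTimestamp = incoming.upperTimestamp;
    out.upperIdPrefix = incoming.upperIdPrefix;

    if (incoming.mode == Mode::Skip) {
        out.mode = Mode::Skip;    // Skip is answered with Skip (adjacent Skips get coalesced)
    } else if (incoming.mode == Mode::IdList) {
        out.mode = Mode::IdList;  // reply with our own IDs so the client learns both sides' differences
        for (const auto &rec : myRecordsInRange) out.ids.push_back(rec.id);
    }
    return out;
}
```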
`Fingerprint` ranges contain a digest which can be used to determine whether or not the set of data items within the range are equal on both sides. However, if they differ, determining the actual differences requires further recursive processing.
* Since `IdList` or `Skip` messages will always cause the client to terminate processing for the given ranges, these messages are considered *base cases*.
* If the fingerprints differ, the receiver should split its own view of the range into sub-ranges and reply with those instead (see the sketch below). If the number of records in a sub-range is small, an `IdList` range should be sent. If large, the sub-ranges should themselves be sent as `Fingerprint`s (this is the recursion).
* If a sub-range contains no records, an `IdList` of length 0 is sent because it is smaller.
* How a range is split is up to the implementation. The simplest approach is to divide the records that fall within the range into equal-sized buckets and emit a `Fingerprint` sub-range for each of these buckets. However, an implementation could choose different grouping criteria. For example, events with similar timestamps could be grouped into a single bucket. If the implementation believes recent events are less likely to be reconciled, it could make the most recent bucket an `IdList` instead of `Fingerprint`.
* A reply to a `Fingerprint` range whose digests differ must make progress: it should not simply be the same `Fingerprint` range again, otherwise the protocol may never terminate (if the other side does the same).

The initial message should cover the full universe, and therefore must have at least one range. The last range's upper bound should have the infinity timestamp (and the ID doesn't matter, so it should be empty also). How many ranges are used in the initial message depends on the implementation. The most obvious implementation is to use the same logic as described above, either using the base case or splitting, depending on set size. However, an implementation may choose to use fewer or more buckets in its initial message, and/or may use different grouping strategies.
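The splitting decision described in the list above could be sketched as follows; the bucket count and the "small" threshold are illustrative parameters, not values mandated by the protocol:

```cpp
#include <cstddef>
#include <vector>

// Sketch: when fingerprints differ, either send the IDs directly (base case)
// or split into several fingerprinted sub-ranges (recursion).
std::vector<Range> splitRange(const std::vector<Record> &recordsInRange,
                              size_t idListThreshold = 16, size_t numBuckets = 16) {
    std::vector<Range> out;

    if (recordsInRange.size() <= idListThreshold) {
        Range r;
        r.mode = Mode::IdList;  // base case: small range, send the complete ID list
        for (const auto &rec : recordsInRange) r.ids.push_back(rec.id);
        out.push_back(r);
        return out;
    }

    size_t bucketSize = (recordsInRange.size() + numBuckets - 1) / numBuckets;
    for (size_t i = 0; i < recordsInRange.size(); i += bucketSize) {
        Range r;
        r.mode = Mode::Fingerprint;  // recursion: fingerprint each bucket of records
        // r.fingerprint = ...;      // computed over the IDs in this bucket (see below)
        // r.upperTimestamp / r.upperIdPrefix: a bound just past the last record of the bucket
        out.push_back(r);
    }
    return out;
}
```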
Once the client has looped over all ranges in a server's message and its constructed response message is a full-universe `Skip` range (i.e. the empty string `""`), then it needs no more information from the server and therefore it should terminate the protocol.
Fingerprints are short digests (hashes) of the IDs contained within a range. A cryptographic hash function could simply be applied over the concatenation of all the IDs, however this would mean that generating fingerprints of sub-ranges would require re-hashing a potentially large number of IDs. Furthermore, adding a new record would invalidate a cached fingerprint, and require re-hashing the full list of IDs.
To improve efficiency, negentropy fingerprints are specified as an incremental hash. There are several considerations to take into account, but we believe the algorithm used by negentropy represents a reasonable compromise between security and efficiency.
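The exact fingerprint function is defined in the Negentropy Protocol V1 specification; the sketch below only illustrates the incremental-hash idea. Because IDs are combined with a commutative, associative operation (here, addition modulo 2^256), per-range accumulators can be cached, extended one record at a time, and merged for adjacent sub-ranges without rehashing every ID:

```cpp
#include <array>
#include <cstdint>
#include <string>

// Illustration of the incremental-hash idea only; not the normative algorithm.
struct Accumulator {
    std::array<uint8_t, 32> sum{};  // running sum of IDs, modulo 2^256

    void add(const std::string &id) {
        unsigned carry = 0;
        for (size_t i = 0; i < 32; i++) {
            unsigned b = (i < id.size()) ? static_cast<uint8_t>(id[i]) : 0;
            unsigned t = sum[i] + b + carry;
            sum[i] = static_cast<uint8_t>(t & 0xFF);
            carry = t >> 8;  // carry out of the top byte is discarded (mod 2^256)
        }
    }
};
```

A short digest would then be derived from the accumulator (for example with a cryptographic hash over its contents); see the V1 specification for the exact construction used by negentropy.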
If there are too many differences and/or they are too evenly distributed throughout the range, then message sizes may become unmanageably large. This might be undesirable if the network transport has message size limitations, meaning you would have to implement some kind of fragmentation system. Furthermore, large batch sizes inhibit work pipelining, where the synchronised records can be processed in parallel with additional reconciliation.
Because of this, negentropy implementations may support a frame size limit parameter. If configured, all messages created by this instance will be of length equal to or smaller than this number of bytes. After processing each message, any discovered differences will be included in the `have`/`need` arrays on the client.
To implement this, instead of sending all the ranges it has found that need syncing, the instance sends only enough of them to stay under the size limit. The remaining ranges are replied to with a single coalesced `Fingerprint` range so that they will be processed in subsequent message rounds. Frame size limits can increase the number of messaging round-trips and the bandwidth consumed.
In some circumstances, already-reconciled ranges can be coalesced into this final `Fingerprint` range. This means that these ranges will get re-processed in subsequent reconciliation rounds. As a result, if either of the two sync parties uses frame size limits, discovered differences may be added to the `have`/`need` arrays multiple times. Applications that cannot handle duplicates should track the reported IDs to avoid processing items multiple times.
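For applications that need to suppress duplicates, one simple approach (sketched below; not part of any reference API) is to remember which IDs have already been reported across rounds:

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Sketch: drop IDs that were already reported in an earlier reconciliation round.
void dedupe(std::vector<std::string> &ids, std::unordered_set<std::string> &seen) {
    std::vector<std::string> unique;
    for (auto &id : ids) {
        if (seen.insert(id).second) unique.push_back(std::move(id));
    }
    ids.swap(unique);
}
```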
This section lists all the currently-known negentropy implementations. If you know of a new one, please let us know by opening an issue.
| Language | Author | Status | Storage |
|---|---|---|---|
| C++ | reference | Stable | Vector, BTreeMem, BTreeLMDB, SubRange |
| Javascript | reference | Stable | Vector |
| Rust | Yuki Kishimoto | Stable | Vector |
| Go | Illuzen | Stable | Vector |
| C bindings | DarshanBPatel | Experimental | Same as C++ |
| Go | fiatjaf | Stable, Nostr-specific | Vector |
This section lists the currently-known applications of negentropy. If you know of a new one, please let us know by opening an issue.
fiatjaf added support to fq to inspect and debug negentropy messages (see example usage).
There is a conformance test-suite available in the `test/` directory.
In order to test a new language, you should create a "harness", which is a basic stdio line-based adapter for your implementation. See the `test/cpp/harness.cpp` and `test/js/harness.js` files for examples. Next, edit the file `test/Utils.pm` and configure how your harness should be invoked.
Harnesses may require some setup before they are usable. For example, to use the C++ harness you must first run:
```
git submodule update --init
cd test/cpp/
make
```
In order to run the test-suite, you'll need the perl module Session::Token (`libsession-token-perl` Debian/Ubuntu package).
Once set up, you should be able to run something like `perl test.pl cpp,js` from the `test/` directory. This will perform the following:
* The test is repeated using each language as both the client and the server.
* Afterwards, a different fuzz test is run for each language in isolation, and the exact protocol output is stored for each language. These are compared to ensure they are byte-wise identical.
* Finally, a protocol upgrade test is run for each language to ensure that, when run as a server, it correctly indicates to the client when it cannot handle a specific protocol version.
For the Rust implementation, check out its repo in the same directory as the `negentropy` repo, build the `harness` commands for both C++ and Rust, and then run `perl test.pl cpp,rust` from inside the `negentropy/test/` directory.
For the golang implementation, check out the repo in the same directory as the `negentropy` repo, then run `perl test.pl cpp,go` from inside the `negentropy/test/` directory.
(C) 2023-2024 Doug Hoyte and contributors
Protocol specification, reference implementations, and tests are MIT licensed.
See our introductory article or the low-level protocol spec for more information.
Negentropy is a Log Periodic project.