stellar / slingshot

A new blockchain architecture under active development, with a strong focus on scalability, privacy and safety

utreexo: caching strategy #284

Open oleganza opened 5 years ago

oleganza commented 5 years ago

First, to make utreexo proofs usable, we should not commit the utreexo state on every block. Downsides of committing too often, in order of priority:

  1. Usability: frequent updates make proofs become stale, increasing the bandwidth wasted on re-submitting them. The network should have ample time to propagate the proofs and verify transactions before a proof becomes stale. For R1CS proofs and signatures that's normally the case: tx signers do not step on their own toes. But utxo proofs can get stale due to everyone else's activity.
  2. SPV clients' bandwidth costs. SPV clients have to track their utxos, and to do that they have to track not only block headers, but all the utreexo updates too. If updates are spaced out, overlapping Merkle paths can be compressed, and transient utxos (created and destroyed in between state commitments) can be ignored.
  3. Computation cost: computing the state commitment involves a lot of hashing, which can be slightly optimized via batching: e.g. an intermediate node that changes almost every block needs to be recomputed at most once in between commitments (see the sketch after this list).
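
A minimal sketch of the batching idea in (3), assuming nothing about the real slingshot/utreexo data structures: it uses a flat binary tree and `u64` stand-ins for 32-byte hashes, marks changed internal nodes as dirty, and rehashes each of them exactly once at commitment time, no matter how many leaf updates touched them in between.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Minimal sketch of batched recomputation between commitments.
/// Node values are u64 stand-ins for 32-byte hashes; the tree is a flat,
/// 1-based heap (children of node i are 2i and 2i+1). This is NOT the
/// real utreexo forest, just an illustration of "mark dirty, rehash once
/// at commitment time".
struct CommitmentTree {
    nodes: Vec<u64>,
    first_leaf: usize,
    dirty: HashSet<usize>,
}

fn hash_pair(left: u64, right: u64) -> u64 {
    // Stand-in for a cryptographic hash of two child nodes.
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

impl CommitmentTree {
    /// `num_leaves` is assumed to be a power of two for simplicity.
    fn new(num_leaves: usize) -> Self {
        CommitmentTree {
            nodes: vec![0; 2 * num_leaves],
            first_leaf: num_leaves,
            dirty: HashSet::new(),
        }
    }

    /// Update a leaf: only mark its ancestors dirty, do not rehash yet.
    fn set_leaf(&mut self, leaf: usize, value: u64) {
        let mut i = self.first_leaf + leaf;
        self.nodes[i] = value;
        while i > 1 {
            i /= 2;
            self.dirty.insert(i);
        }
    }

    /// At commitment time, rehash each dirty node exactly once, bottom-up,
    /// no matter how many times its subtree changed since the last commit.
    fn commit(&mut self) -> u64 {
        let mut dirty: Vec<usize> = self.dirty.drain().collect();
        dirty.sort_unstable_by(|a, b| b.cmp(a)); // deeper nodes (larger index) first
        for i in dirty {
            self.nodes[i] = hash_pair(self.nodes[2 * i], self.nodes[2 * i + 1]);
        }
        self.nodes[1] // the root is the state commitment
    }
}

fn main() {
    let mut tree = CommitmentTree::new(8);
    // Many updates between commitments touch the same internal nodes...
    for (leaf, value) in [(0usize, 11u64), (1, 22), (0, 33), (5, 44)] {
        tree.set_leaf(leaf, value);
    }
    // ...but each dirty node is rehashed only once here.
    println!("root = {:x}", tree.commit());
}
```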

Strategies

Assuming we commit every T seconds, or every K blocks, here is how we can optimize bandwidth:

  1. Caching the top n levels of the tree, with a memory overhead of 32*(2^n) bytes. If n=16, that's about 2 MB of data, while every proof saves n*32 = 512 bytes of bandwidth (>50% for recent utxos). This trick was suggested by David Vorick. (A back-of-the-envelope calculation follows this list.)
  2. Caching all the uncommitted utxo proofs: this comes for free, since we need to store them anyway to form the commitment. So if a utxo is spent before the commitment, the user does not have to transmit its proof at all.
  3. Caching an extra number of committed proofs, capped by total size. Tadge Dryja's estimate is that "1.5 Gb brings down bandwidth overhead to 12%".
  4. Caching "update objects" needed to bring proofs from stale blocks up to speed, per @bobg's suggestion. This does not directly save bandwidth (the proof, although obsolete, still has to be transferred), but it lets us trade storage for more frequent commitments without hurting usability (downside (1) above).
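
To make the numbers in item 1 concrete, here is a tiny back-of-the-envelope calculation (a sketch only; it assumes 32-byte hashes and takes n = 16 from the example above):

```rust
/// Rough cost/benefit of caching the top n levels of the tree (strategy 1).
/// Assumes 32-byte hashes and n = 16, as in the example above.
fn main() {
    let n: u32 = 16;
    let hash_size: u64 = 32;

    // Memory overhead: roughly one hash per node in the top n levels, ~2^n nodes.
    let cache_bytes = hash_size * (1u64 << n);

    // Bandwidth saving: each proof can drop its top n sibling hashes.
    let saved_per_proof = hash_size * n as u64;

    println!(
        "cache: ~{:.1} MiB, saved per proof: {} bytes",
        cache_bytes as f64 / (1024.0 * 1024.0),
        saved_per_proof
    );
    // Prints: cache: ~2.0 MiB, saved per proof: 512 bytes
}
```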

Which caching should be a part of consensus rules?

(1) Caching the top levels is easy to enforce, but it raises the minimum required space ~1000x (from a few kilobytes to a few megabytes), while the worst-case bandwidth overhead without caching any utxo proofs is just 2x.

(2) Caching uncommitted utxo proofs is free - we need to store them anyway to compute the commitment later.

(3) Caching an extra number of recent utxo proofs requires significantly more storage (plus the overhead of recomputing most of them) to bring the bandwidth overhead down meaningfully, so it may be best to leave this as a per-node decision and let the network tune it over time.

Why should any of the above be part of the consensus rules at all?

Why not support all of these options and negotiate them at the peer level?

Recipient:

1. I cache the top N levels of the tree.
2. I cache all utxo proofs starting from block `x`.
3. I cache all utxo blobs starting from block `y`.

Sender:

1. Is my utxo in the fully cached list? If yes, send a pointer into that list instead of a proof or the entire utxo.
2. Is my utxo in the list of cached proofs? If yes, send a pointer to a proof, and the utxo itself.
3. Else: compute the proof, strip the top N items from it, and send the result.
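
A rough sketch of how this negotiation could look. All names here (`CacheAdvertisement`, `ProofPayload`, `select_payload`) and the exact membership tests are hypothetical, invented for illustration; they are not part of the slingshot codebase or any agreed-upon wire format.

```rust
/// Hypothetical shape of what a recipient advertises about its caches.
#[derive(Debug)]
struct CacheAdvertisement {
    /// "I cache the top N levels of the tree."
    top_levels: usize,
    /// "I cache all utxo proofs starting from block x."
    proofs_since_block: u64,
    /// "I cache all utxo blobs starting from block y."
    utxos_since_block: u64,
}

/// What the sender puts on the wire for one input, per the three cases above.
#[derive(Debug)]
enum ProofPayload {
    /// Utxo is in the recipient's fully cached list: send only a pointer.
    CachedUtxo { index: u64 },
    /// Only the proof is cached: send a pointer to it plus the utxo itself.
    CachedProof { index: u64, utxo: Vec<u8> },
    /// Nothing is cached: send the utxo and the proof with the top
    /// `top_levels` hashes stripped.
    FreshProof { utxo: Vec<u8>, truncated_proof: Vec<[u8; 32]> },
}

/// Sender-side selection. `utxo_block` is the block where the utxo was
/// created and `cache_index` its (hypothetical) position in the peer's
/// caches; the exact membership test is glossed over here.
fn select_payload(
    peer: &CacheAdvertisement,
    utxo_block: u64,
    cache_index: u64,
    utxo: Vec<u8>,
    full_proof: Vec<[u8; 32]>,
) -> ProofPayload {
    if utxo_block >= peer.utxos_since_block {
        // 1. Fully cached: a pointer replaces both the proof and the utxo.
        ProofPayload::CachedUtxo { index: cache_index }
    } else if utxo_block >= peer.proofs_since_block {
        // 2. Proof cached: point at the proof, ship the utxo itself.
        ProofPayload::CachedProof { index: cache_index, utxo }
    } else {
        // 3. Otherwise: ship the proof minus the top hashes the peer caches
        //    (assuming the proof is ordered leaf-to-root).
        let keep = full_proof.len().saturating_sub(peer.top_levels);
        let mut truncated_proof = full_proof;
        truncated_proof.truncate(keep);
        ProofPayload::FreshProof { utxo, truncated_proof }
    }
}

fn main() {
    let peer = CacheAdvertisement {
        top_levels: 16,
        proofs_since_block: 100_000,
        utxos_since_block: 120_000,
    };
    // A utxo created at block 90_000 predates both caches, so the sender
    // falls through to case 3 and sends a truncated proof.
    let payload = select_payload(&peer, 90_000, 0, vec![0u8; 64], vec![[0u8; 32]; 30]);
    println!("{:?}", payload);
}
```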

Relay problem

The recipient needs to normalize the proof with their own data, so that they can run the same protocol with respect to another recipient, who may have completely different parameters.