rgb-archive / spec

[OLD!] RGB Protocol specifications for Bitcoin-based digital assets
https://rgb-org.github.io/

Nested proof structure results in very large proofs for transfers with multiple inputs #91

Open chm-diederichs opened 5 years ago

chm-diederichs commented 5 years ago

I've uploaded a simple example proof together with a module for generating branched scripts. https://github.com/chm-diederichs/rgb-proof-examples

`proofGen(depth, branch)` generates a proof that is `depth` transfers away from the root issuance proof, with each non-root proof taking `branch` identical inputs defined as `proofGen(depth - 1, branch)`.

The recursive branching structure is oversimplified, but it provides a useful heuristic: the size of a proof scales with the sum of the lengths of all upstream branches. The example given shows the output of `proofGen(7, 3)` printed as a JSON object: the file is 2.2 MB, and the encoded proof comes to 393 kB.

The above parameters were chosen to keep the file sizes convenient, but proof size explodes quickly as `depth` or `branch` is increased; e.g. `proofGen(14, 3)` yields an encoded proof 860 MB in size.
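For concreteness, a toy model of this blow-up is sketched below. It is not the actual RGB proof encoding and not the linked proofGen module; the per-proof byte constant is an assumption chosen only to roughly reproduce the figures quoted above.

```typescript
// Toy model of the nested-proof blow-up described above. Not the actual RGB
// encoding or the linked proofGen module; PER_PROOF_BYTES is an assumption
// picked only to roughly match the quoted figures (393 kB at depth 7,
// ~860 MB at depth 14).

interface ToyProof {
  payloadBytes: number;   // bytes unique to this proof (assumed constant)
  inputs: ToyProof[];     // upstream proofs embedded verbatim
}

const PER_PROOF_BYTES = 120; // assumption, not taken from the spec

// Mirror of proofGen(depth, branch): every non-root proof embeds `branch`
// identical copies of the proof one step closer to issuance.
function toyProofGen(depth: number, branch: number): ToyProof {
  if (depth === 0) return { payloadBytes: PER_PROOF_BYTES, inputs: [] };
  const upstream = toyProofGen(depth - 1, branch);
  return { payloadBytes: PER_PROOF_BYTES, inputs: Array(branch).fill(upstream) };
}

// Size when every upstream proof is serialized inline: the proof count is
// (branch^(depth+1) - 1) / (branch - 1), i.e. exponential in depth.
function nestedSize(p: ToyProof): number {
  return p.payloadBytes + p.inputs.reduce((sum, q) => sum + nestedSize(q), 0);
}

for (const [depth, branch] of [[7, 3], [14, 3]]) {
  const bytes = nestedSize(toyProofGen(depth, branch));
  console.log(`proofGen(${depth}, ${branch}) ~ ${(bytes / 1e6).toFixed(1)} MB`);
}
```

The toy model makes clear that most of the blow-up comes purely from re-embedding identical upstream proofs: if each unique upstream proof were serialized once and referenced thereafter, the same transfer would only require `depth + 1` distinct proofs.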

I understand this recursive structure is the extreme case, but with assets such as USDT being traded over Lightning the degree of branching would still be significant, and proof sizes may grow unreasonably over time. Maybe my intuition about this is off?

It should be noted that in cases where different assets are transferred in the same proof, later parties could prune off the proof chains of assets they are not interested in, but this does not help in the case of multiple inputs for a single asset.

Possible solutions are:

  1. Periodic return to the central issuer, who could validate a long chain of proofs and then burn and reissue the asset with a virgin proof.

  2. Each party must only verify the previous proof in the chain (i.e. the recipient verifies the donor's ownership) before accepting the trade.

    • Potential issue: a malicious party could transfer an asset to themselves and accept an invalid proof, which future buyers could not check.
  3. A zero-knowledge protocol to validate proofs between parties and reduce the number of proofs that need to be passed along (this is proposed as a future optimisation rather than an immediate option).

dr-orlovsky commented 5 years ago

Well, of course if you put in an exponential function (like depth^branches) you will get an exponential explosion in the proof size. The reality is that the proof tree even for USDT will be sparse: you will have much smaller branching over the tree, and you will have a lot of aggregating operations. So the usual estimate for such a DAG is ln(depth^branches), and with each proof having a size < 1 kB, an average branching rate of 1.5-3 and a depth of 1000, the whole proof for some random USDT user will still be proportional to a factor of 10, i.e. << 1 MB.
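As a sanity check, the sketch below just plugs the figures quoted in this comment into that estimate; the constants are the numbers stated above, not values taken from the spec.

```typescript
// Back-of-envelope restatement of the estimate above: if the proof DAG is
// sparse and aggregation keeps histories short, the number of proofs a user
// must hold is taken to grow like ln(depth^branching) = branching * ln(depth),
// not like branching^depth. All constants are the figures quoted in the
// comment, not values from the spec.

const proofSizeBytes = 1_000;   // "< 1 kB" per proof
const depth = 1_000;            // transfers since issuance
for (const branching of [1.5, 3]) {
  const proofsToKeep = branching * Math.log(depth);  // ~ ln(depth^branching)
  const totalBytes = proofsToKeep * proofSizeBytes;
  console.log(`branching=${branching}: ~${proofsToKeep.toFixed(0)} proofs, ` +
              `~${(totalBytes / 1_000).toFixed(0)} kB total`);
}
```

Under these assumptions the holder keeps on the order of 10-20 proofs, i.e. tens of kilobytes, which is where the "<< 1 MB" figure comes from.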

dr-orlovsky commented 5 years ago

I think we need to create a more real-world-like simulation to examine the issue; for now I am leaving it for the next version of the spec.

dr-orlovsky commented 5 years ago

With the new proof structure from the latest v0.5 revision, the typical proof will be above 100 bytes, so we need to re-estimate the modelling assumptions.

dr-orlovsky commented 4 years ago

This issue will be partially solved by the introduction of the zero-knowledge multi-message commitment structure, where different assets under the same transfer will be hidden behind Pedersen commitments; i.e. the histories of multiple assets will never cross in a way that would increase the size of the off-chain data for any of the assets. See more here: https://github.com/LNP-BP/lnpbps/blob/master/lnpbp-0004.md
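A minimal hash-based sketch of the idea is given below. It is not the LNPBP-4 construction itself (which uses Pedersen commitments and a different layout), and the asset names and messages are made up; the point is only that a holder of one asset can verify inclusion in the shared commitment using that asset's data plus opaque digests of the other assets, so the other histories never have to travel with it.

```typescript
// Minimal sketch of the idea behind a multi-message commitment: commit to
// several per-asset messages at once so that one asset's message can be
// verified against the commitment without revealing, or enlarging, the other
// assets' histories. Plain SHA-256 illustration, NOT the Pedersen-based
// LNPBP-4 construction.
import { createHash } from 'crypto';

const sha256 = (data: string | Buffer): Buffer =>
  createHash('sha256').update(data).digest();

// One leaf per asset: H(H(assetId) || H(message)). Only digests of other
// assets ever travel with a given asset's history.
function leaf(assetId: string, message: string): Buffer {
  return sha256(Buffer.concat([sha256(assetId), sha256(message)]));
}

// The combined commitment placed (conceptually) into the single on-chain
// anchor: here simply the hash of all leaves in a fixed order.
function multiCommit(leaves: Buffer[]): Buffer {
  return sha256(Buffer.concat(leaves));
}

// A receiver of asset A verifies A's message against the commitment using
// only A's data and the opaque leaf digests of the other assets.
const leaves = [
  leaf('asset-A', 'transfer 10 units to Alice'),
  leaf('asset-B', 'transfer 3 units to Bob'),     // never revealed to Alice
];
const commitment = multiCommit(leaves);
const recomputed = multiCommit([
  leaf('asset-A', 'transfer 10 units to Alice'),  // Alice knows this
  leaves[1],                                      // opaque digest only
]);
console.log('asset-A proof valid:', commitment.equals(recomputed));
```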