lyoshenka closed this issue 5 years ago
We currently hash in the last take-over height as well as the currently active claim. With this change are we needing to hash in take-over heights or valid heights for each claim? Or is the one take-over height sufficient?
pinging @kaykurokawa or @lbrynaut
Clarified goal: allow a 3rd party to prove the existence of a non-winning claim.
The design for this hard fork:
First, a word on the current computation: the claimID == hash(outPoint.hash, outPoint.n). However, the per-leaf hash used in the Merkle tree == hash(child[0], child[1], ..., hash(outPoint.hash, outPoint.n, last takeover height)), where outPoint in this context refers to the current winning claim at that leaf node.
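As a concrete sketch of the current scheme (the hash primitive and byte encodings below are assumptions for illustration, not taken from the source; lbrycrd's actual serialization differs):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Double SHA-256, assumed here as the hash primitive.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def claim_id(txid: bytes, n: int) -> bytes:
    # claimID == hash(outPoint.hash, outPoint.n); encoding is illustrative.
    return sha256d(txid + n.to_bytes(4, "little"))

def leaf_hash_current(child_hashes: list, winning_txid: bytes,
                      winning_n: int, takeover_height: int) -> bytes:
    # Per-leaf hash: hash(child[0], child[1], ..., hash(outPoint.hash,
    # outPoint.n, last takeover height)), where outPoint belongs to the
    # currently winning claim at this leaf node.
    inner = sha256d(winning_txid + winning_n.to_bytes(4, "little") +
                    takeover_height.to_bytes(4, "little"))
    return sha256d(b"".join(child_hashes) + inner)
```

Note that only the winning claim's outpoint enters the leaf hash, which is exactly why non-winning claims cannot currently be proven.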
Proposal A: at some particular block height, the method CClaimTrieCache::recursiveComputeMerkleHash will change its behavior. The inner hash in the per-leaf computation becomes: hash(last takeover height, claimID[0], claimID[1], etc.).
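Under Proposal A, the inner hash would cover every claim's ID plus the takeover height. A minimal sketch, with the same caveats on hash primitive and encoding as above:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Double SHA-256, assumed here as the hash primitive.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def leaf_inner_hash_proposed(takeover_height: int, claim_ids: list) -> bytes:
    # Proposed: hash(last takeover height, claimID[0], claimID[1], ...).
    # claim_ids must already be in canonical (effective-amount) order,
    # since reordering changes the digest.
    return sha256d(takeover_height.to_bytes(4, "little") + b"".join(claim_ids))
```

Because every claimID is hashed in, a proof over this structure can demonstrate the presence of any claim at a name, not just the winner.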
With that change in the hash computation, getnameproof no longer returns sufficient contextual data to recompute the tree hash. We have two options to address that shortcoming:
One option is to call getclaimsforname and use that data to rebuild the tree. Unfortunately, it would need to be called once per leaf node in the hierarchy, potentially adding |claims| more RPC calls. Either way, consumers of getnameproof would need to be updated. I'm only aware of this usage: https://github.com/lbryio/lbry/blob/ee5be5adc80d4513fbd2fa26881833de3487dd54/lbrynet/wallet/claim_proofs.py#L20 .
Proposal B: same as above, except that we would also switch to a binary-tree hashing combination for the claimtrie nodes and for the claims within those nodes (or just the latter). As I understand it, this is how transactions are hashed already. See the answers here for an explanation of what I mean: https://bitcoin.stackexchange.com/questions/50674/why-is-the-full-merkle-path-needed-to-verify-a-transaction . I think this would significantly reduce the amount of data we need to return from getnameproof.
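The data savings come from pairwise (binary) Merkle combination: a membership proof then needs only one sibling hash per level, O(log n) data instead of O(n). A sketch of Bitcoin-style pairing (duplicating the last element on odd-sized levels); the helper names here are mine, not lbrycrd's:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Double SHA-256, as Bitcoin uses for its transaction Merkle tree.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes: list) -> bytes:
    # Combine leaves pairwise, level by level, up to a single root.
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last on odd-sized levels
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(hashes: list, index: int) -> list:
    # Collect one sibling hash per level for the leaf at `index`.
    branch, level = [], list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        branch.append(level[index ^ 1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify_branch(leaf: bytes, branch: list, index: int, root: bytes) -> bool:
    # Recompute the root from the leaf and its O(log n) siblings.
    h = leaf
    for sibling in branch:
        # An odd index means this node is the right child at this level.
        h = sha256d(sibling + h) if index & 1 else sha256d(h + sibling)
        index //= 2
    return h == root
```

For a node with many claims, returning `merkle_branch` output instead of every sibling claim's data is where Proposal B's savings come from.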
Open questions:
Answers discussed with @kaykurokawa:
One of my concerns was the ordering issue mentioned in #196. Essentially, you can't re-sort the claims in a node without first computing the effective amount (EA) for those claims. Unfortunately, the EA isn't persisted; it's set to random bits when the claims are loaded from disk. The method that sorts the claims does recompute the value. I'm going to run on the assumption that the claims were sorted when they were persisted to disk, and that they don't lose their ordering when the data is retrieved from disk.
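That "recompute EA before sorting" requirement can be made explicit in code. A hypothetical sketch, where the Claim fields and the tie-break rule are illustrative assumptions, not lbrycrd's actual types:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    claim_id: str
    amount: int                                         # amount staked on the claim itself
    supports: List[int] = field(default_factory=list)   # active support amounts
    effective_amount: int = 0                           # not persisted: garbage until recomputed

def sort_by_effective_amount(claims: List[Claim]) -> List[Claim]:
    # Recompute EA first; the value loaded from disk cannot be trusted.
    for c in claims:
        c.effective_amount = c.amount + sum(c.supports)
    # Winning claim first; tie-breaking on claim_id is an assumption here.
    return sorted(claims, key=lambda c: (-c.effective_amount, c.claim_id))
```

Sorting on the stale `effective_amount` field instead would produce a nondeterministic claim order, and with Proposal A that order feeds directly into the leaf hash.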
We've had new requirements on this. It now needs to implement all necessary support for this: https://www.notion.so/lbry/Q-A-on-RPC-results-verification-44888d23efa3475a90eec997f9bf3103 . It also needs to return the claim_sequence field in all claim RPC calls.
Problem
Right now, only the winning claim for each name is included in the hash of the claimtrie. As a result, resolution of non-winning claims cannot be validated by SPV clients.
Solution
Include all claims in the claimtrie hash.