Closed yeastplume closed 1 year ago
Some thinking and GitHub archaeology around the 'pruning coupling' issue above:
Compaction originally removed all pruned leaves
Merkle proofs were originally desired to allow spending of timelocked coinbase outputs without having access to the full block:
Merkle proof implementation here
In the original implementation, where all pruned leaves were removed, it was impossible to reliably generate a Merkle proof for a coinbase output after a rewind.
`check_compact` was changed to maintain leaf siblings in the remove log and underlying file, only purging them when their parents are removed. It seems the alternative solution here could have been to store hashes of pruned leaves with siblings, instead of retaining all of their data.
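As a toy illustration of that alternative (invented types and a stand-in hash function, not grin's actual MMR code): keeping only a pruned leaf's hash is enough to recompute parent hashes, so the full sibling data need not be retained.

```rust
// Toy sketch, NOT grin code: a pruned leaf keeps only its hash,
// which is all a parent hash computation needs.

fn hash_pair(left: u64, right: u64) -> u64 {
    // stand-in for a real hash function, for illustration only
    left.wrapping_mul(31).wrapping_add(right)
}

enum Leaf {
    Full { data: Vec<u8>, hash: u64 },
    PrunedHash(u64), // data removed, hash retained
}

impl Leaf {
    fn hash(&self) -> u64 {
        match self {
            Leaf::Full { hash, .. } => *hash,
            Leaf::PrunedHash(h) => *h,
        }
    }
}

fn main() {
    let sibling = Leaf::Full { data: vec![1, 2, 3], hash: 42 };
    let pruned = Leaf::PrunedHash(7);
    // the parent hash is computable even though one child's data is gone
    let parent = hash_pair(pruned.hash(), sibling.hash());
    assert_eq!(parent, hash_pair(7, 42));
    println!("{}", parent);
}
```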
In the current code, the wallet is not required to provide Merkle proofs to spend coinbase outputs; the node checks their maturity by looking up the height in the chain instead. (Made possible by `output_mmr_size` now being committed to in the block header.)
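A hedged sketch of that height-based check (function and parameter names are illustrative, not grin's actual API; 1,440 blocks is grin's coinbase maturity):

```rust
// Illustrative sketch: with the output's height known from the chain,
// coinbase maturity is a simple height comparison, and no Merkle proof
// is needed from the wallet.
const COINBASE_MATURITY: u64 = 1_440; // grin's coinbase maturity, ~1 day of blocks

fn coinbase_is_mature(output_height: u64, head_height: u64) -> bool {
    head_height >= output_height + COINBASE_MATURITY
}

fn main() {
    assert!(!coinbase_is_mature(100, 1_000)); // too young to spend
    assert!(coinbase_is_mature(100, 1_540)); // matured
}
```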
So the Merkle proof and 'prune coupling' changes are no longer needed at this stage.
However, Merkle proofs are needed for PIBD, but these could also be generated from hashes instead of stored siblings. They are also beyond the rollback horizon, so there shouldn't be any rewind concerns.
Other (possibly relevant) PRs:
Questions are:
So to summarize: once `output_mmr_size` was added to the block header, coinbase maturity could be validated by nodes without wallets supplying a Merkle proof. Based on the above I can only really conclude that the 'pruning coupling' behaviour is legacy behaviour, which may have had less of an impact when the only adverse implication was (cheap) local node storage. However, with PIBD this choice affects the amount of network data that has to be sent, particularly unneeded rangeproofs.
The PIBD segmenter sends these unpruned siblings in its segment data, and the recipient needs to apply them and then immediately prune them based on their presence in the output bitmap.
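The apply-then-prune step might be sketched like this (names and types invented for illustration; it also assumes leaves arrive in position order, which real segment handling would not rely on):

```rust
// Illustrative sketch, not grin's actual API: a PIBD recipient applies
// each segment leaf, then immediately prunes the leaves the output
// bitmap marks as spent.
fn apply_and_prune<'a>(
    segment: &[(usize, &'a str)],
    unspent: &[bool], // output bitmap: true = unspent (keep), false = spent (prune)
) -> Vec<Option<&'a str>> {
    let mut tree: Vec<Option<&'a str>> = Vec::new();
    for &(pos, data) in segment {
        tree.push(Some(data)); // apply the leaf so parent hashes can be built
        if !unspent[pos] {
            tree[pos] = None; // then prune it straight away if it is spent
        }
    }
    tree
}

fn main() {
    // four leaves; the bitmap says positions 1 and 3 are spent
    let segment = [(0, "a"), (1, "b"), (2, "c"), (3, "d")];
    let tree = apply_and_prune(&segment, &[true, false, true, false]);
    assert_eq!(tree, vec![Some("a"), None, Some("c"), None]);
}
```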
The coupling behaviour is fairly deep-rooted in the implementation, and changing it would be very time consuming, with an RFC likely needed.
Caveat that I could be missing something.
Is this ready for review, or should I wait until the outstanding issues are resolved?
> Is this ready for review, or should I wait until the outstanding issues are resolved?
The code as it stands is nowhere near ready for proper review, but everything about PIBD and all issues listed above are ripe for discussion and comment.
I think this is ready for merging into master, with PIBD used only for testnet (unless custom compiled). We'll encourage more people to run testnet nodes, and stop them from time to time for a few weeks. Maybe with regular forum posts to remind people.
This PR is a task tracker for PIBD work, which also handily displays all PIBD implementation related changes on the `pibd_impl` branch thus far. Outstanding tasks or issues that need investigation are outlined below, followed by a list of recently completed (and thus far unreviewed) PRs on the `pibd_impl` branch. Note that considerable work was done on the segmentation and network messaging side previous to this round of work.

This posting will be kept up to date as work progresses; feel free to discuss, ask questions or raise issues in the comments.
General Progress

PIBD sync works as a replacement for the `txhashset.zip` download when running on testnet only.

Work to date is on the `pibd_impl` branch, and a node has successfully synced from scratch on testnet via PIBD.

RFCs
Outstanding issues
3696 partially covers this; still need a mechanism to remove segments from the list of requested segments in `sync_state` if a peer does not respond with a segment for any reason. 3698 covers the rest of this; any issues should (hopefully) be shaken out during testing.
`continue_pibd` runs in the main sync thread, and each sync loop compiles a list of required PIBD segments and requests any that haven't been requested yet. This keeps things simple and within the existing design, which is good. However, a separate thread is also started to periodically check how 'complete' the trees are, and to kick off validation when they're complete. This will be updated as per the issue below, but we still want to ensure this model will suit our needs.
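The per-loop request logic described above could be sketched as follows (names invented, not the actual `sync_state` API): compile the required segments, then request only those not already in flight.

```rust
use std::collections::HashSet;

// Illustrative sketch of one sync-loop pass: given the segments still
// required and the set already requested, return what to request now.
fn segments_to_request(required: &[u64], in_flight: &HashSet<u64>) -> Vec<u64> {
    required
        .iter()
        .copied()
        .filter(|s| !in_flight.contains(s))
        .collect()
}

fn main() {
    // segments 1 and 3 were requested on an earlier pass
    let in_flight: HashSet<u64> = [1, 3].into_iter().collect();
    let needed = segments_to_request(&[0, 1, 2, 3, 4], &in_flight);
    assert_eq!(needed, vec![0, 2, 4]);
    println!("{:?}", needed);
}
```

Removing segments from the in-flight set on timeout (the mechanism still needed per 3696) would slot naturally into the same loop.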
Instead of a `txhashset.zip` download, a node can now simply request the segments it needs and validate up to the new archive header; investigate this.

Completed (Mostly unreviewed but merged into `pibd_impl` branch)

3665
3667
3685
3686
3688
3689
3690
3691
3692
3694
3696
3698
3699
3700
3702
3703
3704
3705
3711