
Implement Compressed Headers #3494

Closed PastaPastaPasta closed 2 years ago

PastaPastaPasta commented 4 years ago

As requested by the mobile team (specifically @quantumexplorer)

As described in https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017834.html

Maybe see https://github.com/jimpo/bips/blob/headers-sync/headersv2.mediawiki

For low-bandwidth nodes that are doing a headers-only sync, reducing the size of the headers can provide a significant bandwidth saving.

This is especially important in Dash, as we have 4x as many blocks per unit time compared to Bitcoin.

The described specification for compressed headers results in a savings of 49%.

There may be Dash-specific space savings in addition to these.

This issue should be used for discussion about this specification, modifications we could/should make to this specification, etc.
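For intuition, here is a rough sketch of a full header next to a compressed one in the common case (contiguous headers, unchanged nBits). The names and exact bit layout here are illustrative only; the draft linked above is authoritative.

```cpp
// Illustrative only: a full 80-byte header vs. a compressed header for the
// common case (contiguous headers, unchanged nBits). Field layout loosely
// follows the draft spec linked above; exact bit assignments may differ.
#include <array>
#include <cstdint>

struct FullHeader {                      // 80 bytes on the wire
    int32_t  nVersion;                   //  4 bytes
    std::array<uint8_t, 32> prevBlock;   // 32 bytes
    std::array<uint8_t, 32> merkleRoot;  // 32 bytes
    uint32_t nTime;                      //  4 bytes
    uint32_t nBits;                      //  4 bytes
    uint32_t nNonce;                     //  4 bytes
};

struct CompressedHeader {                // ~39 bytes in the common case
    uint8_t  bitfield;                   //  1 byte: which fields follow
    std::array<uint8_t, 32> merkleRoot;  // 32 bytes: never compressible
    int16_t  timeOffset;                 //  2 bytes: delta vs. previous header
    uint32_t nNonce;                     //  4 bytes
    // prevBlock omitted (implied by the previous header in the vector),
    // nVersion omitted (repeats a recently seen version),
    // nBits omitted (same as the previous header).
};
// 39/80 bytes => ~51% saved per header in the best case; the occasional
// full fields bring the average down to the ~49% quoted in the proposal.
```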

xdustinface commented 4 years ago

The specification looks good to me overall. What @QuantumExplorer also requested is some way to "trigger the next 24 nBits to be included in the compressed headers" so that the following ones can be calculated (they depend on Dark Gravity Wave). I would propose to always include the nBits in the first 24 compressed headers sent to a newly connected client which requested compressed headers (sketched at the end of this comment).

The first header in the first `block_header2[]` vector to a newly-connected client MUST contain the full `nBits`, `timestamp`, `version` and `prev_block_hash` fields, along with a correctly populated `bitfield` byte.

Here I can't see why we should include the full timestamps and versions in the whole first vector rather than just having the first element full and using the compression from there on. Anyone?

PS: I would happily take this implementation if we come to a final agreement on the specification here 🙂
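A minimal sketch of the rule proposed above, with hypothetical names (the real handler would live next to the getheaders code in net_processing.cpp):

```cpp
// Sketch of the proposed rule, hypothetical names throughout: when answering
// a getheaders2 request, force the full 4-byte nBits into each of the first
// 24 headers so a DGW-based client can seed its difficulty window.
#include <cstddef>
#include <cstdint>

constexpr std::size_t DGW_WINDOW = 24; // Dark Gravity Wave averaging window

bool ShouldIncludeFullNBits(std::size_t indexInResponse,
                            uint32_t nBits, uint32_t prevNBits)
{
    if (indexInResponse < DGW_WINDOW) return true; // seed the client's window
    return nBits != prevNBits;                     // otherwise only on change
}
```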

UdjinM6 commented 4 years ago

nBits currently changes once every 2016 blocks. It could be entirely calculated by the client from the timestamps of the previous 2015 blocks [2].

To simplify 'light' client implementations which would otherwise require consensus-valid calculation of the adjustments, we propose to transmit this according to the Bitfield specification above.

nbits: same as last header (0 byte field) or new (4 byte field).

I think in the case of the Dash version this means that nBits should always be included, since we adjust the difficulty every block.

Here I can't see why we should include the full timestamps and versions in the whole first vector rather than just having the first element full and using the compression from there on. Anyone?

I don't see where the "we should include the full timestamps and versions in the whole first vector" assumption comes from; there is no such thing in the specification if I read it correctly.

xdustinface commented 4 years ago

I don't see where the "we should include the full timestamps and versions in the whole first vector" assumption comes from; there is no such thing in the specification if I read it correctly.

Oh right.. damn 😄

QuantumExplorer commented 4 years ago

I think in the case of the Dash version this means that nBits should always be included, since we adjust the difficulty every block.

I don't think this is necessary, Udjin, because they can be determined from the previous block timestamps. If the value were ever calculated incorrectly, the header's hash would no longer validate against that difficulty.

UdjinM6 commented 4 years ago

@QuantumExplorer from the original spec:

To simplify 'light' client implementations which would otherwise require consensus-valid calculation of the adjustments,

i.e. the idea is/was that light clients would just do simple arithmetic on timestamps and hashing.
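Roughly what that arithmetic would mean for a light client, as a simplified Dark Gravity Wave sketch. Consensus code uses arith_uint256 targets (see pow.cpp); doubles are used here only to keep the sketch self-contained, and the names are illustrative:

```cpp
// Simplified illustration of the Dark Gravity Wave adjustment a light client
// would perform over the last 24 headers. Not consensus code: real DGW uses
// arith_uint256 targets and a weighted average.
#include <algorithm>
#include <cstdint>
#include <vector>

struct HeaderInfo {
    uint32_t nTime;
    double   target; // decoded from the header's nBits
};

// `window` holds the last 24 headers, oldest first (assumed non-empty).
double NextTargetDGW(const std::vector<HeaderInfo>& window)
{
    const int64_t blockSpacing   = 150; // Dash: 2.5-minute blocks
    const int64_t targetTimespan =
        blockSpacing * static_cast<int64_t>(window.size());

    double avgTarget = 0.0;
    for (const auto& h : window) avgTarget += h.target / window.size();

    int64_t actualTimespan = static_cast<int64_t>(window.back().nTime) -
                             static_cast<int64_t>(window.front().nTime);
    // Dampen swings: clamp to [1/3, 3] of the expected timespan.
    actualTimespan = std::clamp(actualTimespan,
                                targetTimespan / 3, targetTimespan * 3);

    // Slower than expected -> easier (larger) target, and vice versa.
    return avgTarget * actualTimespan / targetTimespan;
}
```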

gabriel-bjg commented 3 years ago

FYI: I'm planning to work on this soon.

gabriel-bjg commented 3 years ago

There's a corner case that I don't know how to handle: when receiving a getheaders2 message with a null locator but a valid hashStop, is there any way at all to check if the peer has information about the previous block (the one before hashStop)?

@UdjinM6 maybe you have some ideas :D

UdjinM6 commented 3 years ago

is there any way at all to check if the peer has information about the previous block (the one before hashStop)?

Why would we have to care about it? You should simply send the requested hashStop block header, just like you would with getheaders https://github.com/dashpay/dash/blob/master/src/net_processing.cpp#L2742-L2754 imo.

gabriel-bjg commented 3 years ago

is there any way at all to check if the peer has information about the previous block (the one before hashStop)?

Why would we have to care about it? You should simply send the requested hashStop block header, just like you would with getheaders https://github.com/dashpay/dash/blob/master/src/net_processing.cpp#L2742-L2754 imo.

We should care because we may send the header at hashStop compressed only if we know for sure that the client has the previous block. Should we keep it simple and always send the full (uncompressed) header in this case (i.e. null locator with valid hashStop)?

UdjinM6 commented 3 years ago

we may send the header at hashStop compressed only if we know for sure that the client has the previous block

Why is that? It's not our problem if the peer that requested this header doesn't have the previous one; that's the peer's problem.

gabriel-bjg commented 3 years ago

we may send the header at hashStop compressed only if we know for sure that the client has the previous block

Why is that? It's not our problem if the peer that requested this header doesn't have the previous one; that's the peer's problem.

Great! I had wrongly assumed that we should somehow handle that situation and send uncompressed headers if the peer does not have the previous block. Thank you!
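For the record, a rough sketch of the agreed behaviour, reusing the illustrative CompressedHeader from earlier in this thread. CompressAsFirst is a hypothetical helper, and the real handler would sit next to the getheaders code linked above:

```cpp
// Sketch only: with a null locator we return the single header at hashStop
// and rely on the spec rule that the first header in the first vector is
// sent with all fields full, regardless of what the peer already has.
void ProcessGetHeaders2(const CBlockLocator& locator, const uint256& hashStop,
                        std::vector<CompressedHeader>& out)
{
    if (locator.IsNull()) {
        const CBlockIndex* pindex = LookupBlockIndex(hashStop);
        if (!pindex) return;                     // unknown hash: send nothing
        out.push_back(CompressAsFirst(*pindex)); // first header: all fields full
        return;
    }
    // Otherwise walk forward from the locator's fork point as usual,
    // compressing each header relative to the one before it.
}
```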