
Correcting Chain Work #58

zawy12 commented 4 years ago

Chain work does not correctly determine the number of hashes performed if the number of blocks is small or if the difficulty is changing a lot compared to the hashrate. This is a problem for PoW blockchain decisions that depend on an accurate estimate of hashrate from only a few blocks. An example of a problem this causes: selfish mining attacks can work with <50% of the hashrate if honest miners switch tips before waiting on enough blocks to see sufficient evidence that the alternate tip (of 1 or more blocks 'forked' from their current chain) has had a higher mean hashrate. "Sufficient" means the honest miner estimates it's more likely there was an honest network delay than an attack being attempted. See the "Selfish Mining" footnote to see how selfish mining can be solved if honest miners adopt and enforce partial synchrony, the standard requirement for >51% security in consensus mechanisms, which BTC does not enforce.

Chain work in terms of "hashes" is the sum of difficulties multiplied by a scaling (2^32 for BTC). In this article I'll assume the scaling factor is 1 so that difficulty is the average number of hashes expected to find a solution. For a large number of blocks, chain work is usually an accurate estimate of the number of hashes required, but for a small number of blocks, the number of hashes after N blocks is:

Hashes = sum(difficulty) * (N-1)/N

If difficulty is changing a lot due to a "fast" difficulty algorithm and hashrate is constant, then the time-weighted sum of the inverse targets is slightly more accurate:

Hashes = total_time * 2^256 / average(target * solvetime) * (N-1)/N

which is equal to

Hashes = total_time * harmonic_mean(difficulty / solvetime) * (N-1)/N

Neither method is perfect if difficulty and hashrate are changing, but the initial equation is more accurate.
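To make the two formulas concrete, here is a minimal Python sketch (mine, not from the issue; the block data is made up) computing both estimates from a list of (difficulty, solvetime) pairs, with the scaling factor taken as 1:

```python
def hashes_difficulty_sum(blocks):
    # Hashes = sum(difficulty) * (N-1)/N
    N = len(blocks)
    return sum(d for d, st in blocks) * (N - 1) / N

def hashes_time_weighted(blocks):
    # Hashes = total_time * harmonic_mean(difficulty/solvetime) * (N-1)/N
    N = len(blocks)
    total_time = sum(st for d, st in blocks)
    harmonic_mean = N / sum(st / d for d, st in blocks)
    return total_time * harmonic_mean * (N - 1) / N

constant_D = [(1e6, 500), (1e6, 700), (1e6, 300)]   # same answer from both
changing_D = [(1e6, 500), (2e6, 700), (4e6, 300)]   # slightly different answers
for blocks in (constant_D, changing_D):
    print(hashes_difficulty_sum(blocks), hashes_time_weighted(blocks))
```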

Not correcting for the (N-1)/N gives a large overestimate of hashrate in the case of low N, which allows selfish mining with < 50% of the hashrate. But applying this correction doesn't seem right either, which I'll discuss below.

All of this assumes the current time is the last timestamp in the competing tips (which is never the case), which I'll try to correct below.

The factor (N-1)/N corrects an effect caused by the exponential distribution of solvetimes. This distribution has about 70% more faster-than-average solvetimes than slower-than-average ones. As more blocks are found (N is larger), some will have substantially longer solvetimes that cancel the effect; at N=100 the error is reduced to 1%. The (N-1)/N correction is the result of the Erlang distribution of the sum of N exponentially-distributed solvetimes.
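A quick Monte Carlo sketch (mine, for illustration; the constants are made up) shows the Erlang effect behind the factor: with constant difficulty and constant hashrate, the uncorrected estimate sum(D)/sum(solvetimes) overshoots the true HR by N/(N-1), and crediting only sum(D)*(N-1)/N hashes removes the bias.

```python
import random

HR, D, trials = 1000.0, 600_000.0, 100_000   # hashes/s, hashes/block, samples
for N in (2, 3, 10, 100):
    naive = 0.0
    for _ in range(trials):
        # N exponential solvetimes with mean D/HR; their sum is Erlang(N)
        total_time = sum(random.expovariate(HR / D) for _ in range(N))
        naive += N * D / total_time
    naive /= trials
    print(f"N={N:3d}  naive HR = {naive:7.1f}  "
          f"corrected = {naive * (N - 1) / N:7.1f}  (true HR = {HR})")
```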

Example: if you're comparing a tip with 4 blocks to one with 3 blocks, and they have the same D in every block, and their last timestamps were at the same time (and, to be more precise, that time is now; see below), the 4-block chain has 50% more chain work (and hashrate) than the 3-block chain according to the above equations, instead of the 33% more that normal chain work would indicate.

The effect is large if you want to estimate HR from only a few blocks. For N=2 blocks, HR is 1/2 of what you would expect from the 2 solvetimes. This applies to difficulty algorithms, and maybe to pools, which need to correctly estimate HR. For example, the following BTC-type difficulty algorithm gives the perfectly correct average solvetime (difficulty is not allowed to change between adjustments, so the harmonic mean is not needed).

N = 2
next_N_Ds = HR * block_time
          = (sum of previous N Ds) / (sum of previous N solvetimes) * (N-1)/N * block_time
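Here is a small simulation sketch (mine, with made-up constants) of that algorithm: difficulty is held fixed for a window of N blocks and then reset from the window's total solvetime with the (N-1)/N factor. With constant hashrate the average solvetime comes out close to block_time.

```python
import random

HR, block_time, windows = 1000.0, 600.0, 100_000
for N in (2, 3, 10):
    D, total_time = HR * block_time, 0.0
    for _ in range(windows):
        window_time = sum(random.expovariate(HR / D) for _ in range(N))
        total_time += window_time
        # next_N_Ds = sum(previous N Ds)/sum(previous N solvetimes) * (N-1)/N * block_time
        D = N * D / window_time * (N - 1) / N * block_time
    print(f"N={N}: average solvetime = {total_time / (N * windows):.1f}"
          f"  (target {block_time})")
```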

Notice that at N=1 it gives HR=0. There does not appear to be any way to estimate hashrate when you have only 1 sample of the solvetime. However, in experiments with integer solvetimes required to be >=0, I get a factor of about 1/7 experimentally. To clarify, this difficulty algorithm gives the correct average solvetime (st) if the hashrate is constant:

st = prior solvetime
T = target block time
next_D = prior_D * T / st / 6

It's 6 instead of 7 because of a complicated effect from setting the next difficulty (and thus the next solvetime) based on a solvetime that was measured under a different D. This difficulty algorithm implies the correct hashrate estimate from a single block is D/st/6. But this estimate is only right if your goal is for the average of estimates from single-block solvetime samples to be correct; for half of the block samples you're greatly under-estimating HR. The reason HR is penalized so much by the 1/6 is that some solvetimes are really fast, making those blocks appear to have a really high HR from D/st. More to the point and the math: you're measuring and dividing by solvetime to estimate hashrate, and that small, wildly-varying denominator is the problem. BTW, if you assume the solvetime was not one of the 5% fastest, the correction factor is only 2.44 (instead of 6), as an integral shows (reconstructed numerically below). This means that 95% of the time the 1/7 is reducing the HR estimate by something like 7/2.44 = 2.9x too much.
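The integral appeared as an image in the original issue. A numerical sketch of what I take it to compute, the expectation of T/st over the exponential solvetime distribution with the fastest 5% of solvetimes excluded, reproduces the 2.44:

```python
import math
from scipy.integrate import quad

# 5th-percentile solvetime in units of T: P(st/T < cutoff) = 0.05
cutoff = -math.log(0.95)
# integral of (1/x) * e^(-x) dx from the cutoff to infinity
value, _ = quad(lambda x: math.exp(-x) / x, cutoff, math.inf)
print(value)   # ~2.44
```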

Your "typical" or median block will have a solvetime of ln(2)*T. That is a ln(2) = 0.69 downward adjustment. So what's the correct estimate of hashrate for a single block? Trusting the average solvetime seems wrong. To support the median, the following difficulty algorithm gives a perfectly-correct median solvetime (if hashrate is constant) and an average solvetime only 5% above target, despite having a large 10% per-block adjustment. This supports the idea that if you want the median estimate of hashrate, instead of possibly greatly under-estimating it from a single sample (which can occur with the above 1/6 factor), then median hashrate = D * ln(2) / st, which is sort of just stating the fact that the median solvetime is ln(2)*T.

next_D = D * k
k = 1.1 if (st < ln(2)*T) else 1/1.1

Changing k to a +0.5% and -0.475% adjustment per block to get the right average solvetime makes it as good as any other difficulty algorithm.
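A simulation sketch (mine, constants made up) of the original +/-10% rule with constant hashrate shows the claimed behavior: the median solvetime sits at ln(2)*T, the median of the ideal exponential, and the mean solvetime ends up only a few percent above T.

```python
import math, random, statistics

HR, T, blocks = 1000.0, 600.0, 200_000
D, solvetimes = HR * T, []
for _ in range(blocks):
    st = random.expovariate(HR / D)
    solvetimes.append(st)
    D *= 1.1 if st < math.log(2) * T else 1 / 1.1
print("median / T:", statistics.median(solvetimes) / T)   # ~ ln(2) = 0.693
print("mean   / T:", statistics.mean(solvetimes) / T)     # close to 1; ~5% above per the issue
```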

Since we sample and adjust difficulty only once per block, a variation of this median method seems to be the most mathematically-ideal approach. In an unpublished article I discuss the "mathematical perfection" of extending the idea to adjust difficulty based on the PDF (or CDF) of the solvetimes, i.e. the exponential distribution, like this:

k = [ 0.5/CDF if (st < ln(2)*T) else (1-CDF)/0.5 ] ^ (1/N)

where N is the "filtering constant". It can be 1.6 times the N used for ASERT below and give the same stability and speed of response with half as many "blocks stolen" during on-off mining (a metric I developed), because it doesn't over-estimate hashrate for small solvetimes the way ASERT does. Honest miners need to enforce accurate timestamps on each other as described in the selfish mining footnote, or else they could all set their timestamps to make every solvetime equal the block time and the difficulty would go to zero. Notice this algo gives a >50% miner the ability to get a very large number of blocks in a short time period since it's not targeting the average solvetime; it only results in the correct average if the solvetimes follow the PDF.
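Here is how I read that CDF rule in code (a sketch under my interpretation; the clamp on tiny solvetimes is my addition to avoid dividing by zero):

```python
import math

def next_difficulty(D, st, T, N=100):
    # exponential CDF of the observed solvetime at the current target time T
    cdf = 1.0 - math.exp(-st / T)
    cdf = max(cdf, 1e-9)                 # guard against st == 0 (my addition)
    if st < math.log(2) * T:
        k = 0.5 / cdf                    # faster than the median: raise difficulty
    else:
        k = (1.0 - cdf) / 0.5            # slower than the median: lower difficulty
    return D * k ** (1.0 / N)            # N is the filtering constant
```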

Using Prior Knowledge (ASERT)

There may be another way out of the 1/7 extremism. The ASERT / NEFDA difficulty algorithm shows a way to estimate HR from 1 block while balancing the effect of too many fast solvetimes. The algorithm is:

D = previous_D * e^(-st/T + 1)

We want D = HR * T, so this is estimating the current HR from the st of the previous block to be

HR = previous_D / T * e^(-st/T + 1)

The problem is that T is the average solvetime, so there's a presumption that st indicates some smallish deviation from T. This helps a lot in our estimate of HR. In the case of two competing tips, I think it might be valid for honest miners to assume HR is evenly split between the two tips, so the new T is 2*T for each tip, provided our difficulty algorithm is a good one (not slow like BTC's) that accurately estimates the recent HR. Honest miners could calculate chain work based on this "evenly split" assumption. My first guess at doing this would be to sum the hashes for each tip like this:

chain work for a block = st * previous_D / (2*T) * e^(-st/(2*T) + 1)

For st = T, this one block gives credit for 82% of the 50% network hashrate assumption. Now compare to a tip where 2 blocks were obtained in the same time T, each taking T/2. The chain work (total hashes) would be two times this:

hashes = (T/2) * previous_D / (2*T) * e^(-(T/2)/(2*T) + 1) = 0.53

So for the two blocks, he's credited with 1.06 of the 50% we expected. So the two blocks win, but only by 29% instead of 700%. In testing, this under-estimates HR a little and total work a lot.
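The arithmetic in that comparison, as a sketch (mine), reproducing the 0.82, 0.53, and 29% figures:

```python
import math

def split_credit(st, prev_D, T):
    # per-block work credit under the "evenly split" assumption: the tip is
    # assumed to have half the network hashrate, so its target time is 2*T
    return st * prev_D / (2 * T) * math.exp(-st / (2 * T) + 1)

prev_D, T = 1.0, 600.0
one_block  = split_credit(T, prev_D, T)               # ~0.82 of prev_D
two_blocks = 2 * split_credit(T / 2, prev_D, T)       # 2 * ~0.53 = ~1.06
print(one_block, two_blocks, two_blocks / one_block)  # ratio ~1.29
```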

Effects in Simple Difficulty Algorithms

Hashrate is the above estimate of hashes divided by the total_time. This is relevant to difficulty algorithms that need to estimate the current hashrate to set the target. If chain work were the correct metric for the number of hashes performed, a simple difficulty algorithm would be

next_D = HR * T = sum(D) / sum(solvetime) * desired_block_time

but this results in an incorrect block time by a factor of (N-1)/N in a "fixed difficulty" algorithm like BTC's. The most accurate simple moving average algorithm at keeping the target solvetime over a wide range of N is the following, despite its implied method of measuring HR not being as good as the other options. It works better "accidentally" because of complicated effects caused by changing difficulty only after the measurement is made.

next_D = harmonic_mean(D/solvetimes) * block_time * (N-1)/N

In practice the above is not much different from the common SMA which is

next_target = avg(targets) * avg(solvetimes) / block_time

Notice the above is equal to the following, which shows SMAs are not using chain work to estimate HR.

next_D = harmonic_mean(D) / avg(solvetimes) * block_time
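For reference, the three variants side by side as a sketch (mine), each taking the last N difficulties and solvetimes:

```python
def sma_chain_work(Ds, sts, block_time):
    # next_D = sum(D) / sum(solvetime) * block_time  (off by (N-1)/N)
    return sum(Ds) / sum(sts) * block_time

def sma_harmonic_corrected(Ds, sts, block_time):
    # next_D = harmonic_mean(D/solvetime) * block_time * (N-1)/N
    N = len(Ds)
    return N / sum(st / d for d, st in zip(Ds, sts)) * block_time * (N - 1) / N

def sma_common(Ds, sts, block_time):
    # next_D = harmonic_mean(D) / avg(solvetime) * block_time
    # (equivalent to next_target = avg(targets) * avg(solvetimes) / block_time)
    N = len(Ds)
    return N / sum(1 / d for d in Ds) / (sum(sts) / N) * block_time
```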

Counting Work from All Tips

When adjusting difficulty, the HR that is evident from adding up all the tips should be used.

Additional Adjustment Based on Time Since Last Timestamp

I'll assume the timestamp limits enforced on nodes are tight, say 10% of the block time. If timestamps are not tight, nodes can still observe local time when a block arrives to estimate solvetime (if the miners are not trusted to use accurate timestamps), but it can't be a consensus rule.

The only adjustment to HR I have been able to think of that includes the "time since last block" follows this rule:

If a new block shows up now and it decreases the HR from the current estimate, then assume it just showed up and apply that adjustment.

This means doing the following.

tslb = time since last block
t = solvetime
current_D = difficulty of the block that has not been solved.
// The following is not simplified, to show that using tslb adds 1 to N.
if sum(D)/sum(t) * (N-1)/N  <  (sum(D) + current_D) / (sum(t) + tslb) * (N-1+1)/(N+1):
    HR = sum(D)/sum(t) * (N-1)/N
else:
    HR = (sum(D) + current_D) / (sum(t) + tslb) * N/(N+1)
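As a sketch (mine) of the rule in code:

```python
def hashrate_estimate(Ds, sts, current_D, tslb):
    # Ds, sts: difficulties and solvetimes of the last N solved blocks.
    # current_D: difficulty of the block not yet solved; tslb: time since last block.
    N = len(Ds)
    hr_solved = sum(Ds) / sum(sts) * (N - 1) / N
    hr_with_tslb = (sum(Ds) + current_D) / (sum(sts) + tslb) * N / (N + 1)
    # use the tslb-adjusted estimate only when it lowers the HR estimate
    return hr_solved if hr_solved < hr_with_tslb else hr_with_tslb
```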

[1] Pieter Wuille's tweet-poll showing how this surprises people.
[2] Meni Rosenfeld's paper that mentions it.
[3] My article that tries to explain it to a larger audience.
[4] Could a selfish mining attack with slightly <50% somehow exploit this around the time difficulty changes?

Selfish Mining: It's possible for miners with 33% to 50% of the hashrate to profit by withholding blocks, following 3 rules. A future article will show how selfish mining can be stopped by honest miners keeping local time accurate to within 2 seconds and rejecting blocks for ln(2)*block_time seconds (my guess as to what's minimally sufficient for idealized for-profit considerations if reward is the only source of profit) if the block timestamps are not within -4 to +6 seconds of local time. The +2 second difference is based on assuming the maximum propagation delay is 2 seconds, which is the case >95% of the time for BTC, which has a median propagation time of about 300 ms. Even today, blocks that have had >2 seconds of delay should be checked to see if it's an "accidental" block-withholding (selfish mining) attack. Peer time and 3rd-party time sources are not allowed in correctly-implemented Nakamoto consensus because it's been known since the early 1980's that any consensus mechanism can't be more secure than the consensus on time. In the case of large-valued transactions that may be subject to double spends from an attacker with <50%, the recipient needs to wait longer than normal (roughly in proportion to the value being transferred) to make sure the sender had not done a lot of hashing to get lucky on guessing timestamps or building up a longer chain. This doesn't apply nearly as strongly if, for some reason, the recipient knows the sender didn't know before the transaction that he would be sending it. It mainly applies if the sender gets to choose when to send it, which is immediately after he knows he has a successful attack.
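A sketch of how I read the timestamp rule in that footnote (the constants are the ones quoted there; the function name and structure are mine):

```python
import math, time

BLOCK_TIME = 600                     # seconds, BTC-like (assumed)
EARLY, LATE = -4, 6                  # allowed timestamp offset from local time, seconds
PENALTY = math.log(2) * BLOCK_TIME   # how long to hold back a suspicious block

def accept_block_now(block_timestamp, received_at, now=None):
    """Return True if an honest miner should build on this block right now."""
    now = time.time() if now is None else now
    offset = block_timestamp - received_at   # timestamp vs local clock at arrival
    if EARLY <= offset <= LATE:
        return True                          # timestamp consistent with honest delay
    return now - received_at >= PENALTY      # otherwise wait out ln(2)*block_time
```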