MyHush / hush

Hush is a fork of Zcash focused on secure communications
https://myhush.org

new difficulty algorithm #24

Open zawy12 opened 7 years ago

zawy12 commented 7 years ago

The current difficulty algorithm appears to have an averaging window of N=50, which is pretty long and 3x more than Zcash. This allows big miners to come on and off, "stealing" blocks before the diff algo is able to rise, leaving small miners with a higher-than-necessary difficulty. There appear to be 5 to 10 "attacks" like this per day by miners with ~2x the network hashrate, getting 10 to 20 blocks at 1/3 the correct difficulty setting.

An argument can be made for just copying Zcash, but N=17 still invites attacks. Also, Daira has said they should have gone to a timestamp limit smaller than +/- 3600. I think +/- 1000 is good.

N=10 is better, and if it is used I recommend using my Zawy v1b. Karbowanek coin has spent a lot of time with me testing it and looking for something better. N=10 in the Zcash algo may be pretty much the same, but I distrust trying to use a median like they do, especially with smaller N, at least without tweaking it again to make sure the average comes out correct. Str4d said the median was to deal with bad timestamps, but now I know the average immediately corrects a bad timestamp's effect in the next block (if the solvetime is allowed to go negative), and it should be better due to accuracy.

But I would like to see N=8 used and call it "Gandalf's difficulty algorithm" (see below). No attacker could profit from it, assuming it's not worth getting only 2 to 4 free blocks before having to change coins again. The "free blocks" obtainable from a diff algo are about 1/4 of the N-block averaging window, but up to 1/2 N if they begin the attack when D is accidentally low and then mine for 3/4 N with > 3x the hashrate.

The following looks simple, but the link below shows how much work I've done on advanced techniques like a variable averaging window, hash-attack detection using statistics, least-squares slope prediction, significant-slope-change detection, mixtures of avg and median, weighting recent blocks more heavily, and "discrete D/T averaging" as a hashrate measurement. I was able to get all of them only as good as simple averaging, and none of them better than simply going to a smaller averaging window N.

# Gandalf's difficulty algorithm v1.  "Hash attacks shall not pass!"
# This is Zawy v1b with low N=8 (and X=5)
# Timestamp manipulation protection is not needed due to limits on next_D.  
# Negative timestamps must be allowed.
# 0.92 keeps avg solvetime on track but makes median come out 8% low. I can't find a fix.
# Increase N to 30 if you want correct avg & median, but expect 10% of blocks in small 
# coins to be stolen.   Median mixed in w/ avg will not help. 
# Do not use sum(D's) * T / [max(timestamp)-min(timestamp)] method.
#  5% of block solvetimes are > 5x the target solvetime under constant HEAVY attack.
#  1.3% of block solvetimes are > 5x the target solvetime when hashrate is constant. 
#  3% of blocks will be < 0.5x or > 2x the correct difficulty by accident.
#  When N=15 these numbers are 5%, 1%, and 0.5%.
https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a

avg(last 8 solvetimes) = 1 if avg(last 8 solvetimes) < 1; # prevent divide by zero
next_D = avg(last 8 Ds) * TargetInterval / avg(last 8 solvetimes) * 0.92;
next_D = 1.6*previous_D  if next_D/previous_D > 1.6;
next_D = 0.63*previous_D  if next_D/previous_D < 0.63;
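For anyone who wants to experiment outside a spreadsheet, here is a minimal Python sketch of the pseudocode above (the function name and list-based inputs are mine; this is an illustration, not wallet code):

```python
def gandalf_next_D(last_Ds, last_solvetimes, target_interval=150,
                   N=8, adjust=0.92, rise_limit=1.6, fall_limit=0.63):
    """Zawy v1b with N=8 ("Gandalf"): simple averages of the last N
    difficulties and solvetimes with per-block rise/fall limits.
    Negative solvetimes must be kept so bad timestamps self-correct."""
    avg_D = sum(last_Ds[-N:]) / N
    avg_ST = sum(last_solvetimes[-N:]) / N
    if avg_ST < 1:
        avg_ST = 1                              # prevent divide by zero
    next_D = avg_D * target_interval / avg_ST * adjust
    prev_D = last_Ds[-1]
    next_D = min(next_D, rise_limit * prev_D)   # limit rise to 1.6x per block
    next_D = max(next_D, fall_limit * prev_D)   # limit fall to 0.63x per block
    return next_D
```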

The only potential problem I see is that D will swing below 0.5x the correct D a lot, attracting an attack. But they can only get ~4 blocks at low difficulty. A question I need an answer to is this: how short a "hash attack" can a big miner with 2x the network hashrate accept before it is not worth switching to a coin that is going to catch up to his hashrate within X blocks? What is the upper bound on X for 90% of miners? What is the time cost of switching coins? I can go to N=5, which will result in a lot of D < 0.33x the correct D, which is really attractive, but is it worth 3 blocks for them to switch coins?

lludlow commented 7 years ago

https://github.com/MyHush/hush/blob/master/src/chainparams.cpp#L47

The current averaging window is 17 blocks, which is the same as Zcash. I think we experience more swings since we have a smaller network hashrate.

zawy12 commented 7 years ago

I'm mentioning only Zcash in this, but I assume it's the same algorithm as HUSH. The only difference is that HUSH is experiencing more obvious attacks, if not more of them. Any wider swings HUSH is seeing should be due to its total hashrate being small compared to Zcash, so the big miners affect it more.

I just checked Zcash and they also seem to be using what is effectively an N=50 algorithm. Zcash and HUSH difficulties look pretty darn smooth which shows they are not responding very quickly to hashrate changes. Either there is a mistake in their code that has been carried over to HUSH, or they are doing something strange with medians that's causing their algorithm to respond very slowly. As a result, they too are still getting hash attacks, at least 1 per day. Below is an example attack they suffered in the past few days. Notice how the response to the attack would have been much better if they had been using a simple N=17 average.

[chart: zcash_not_17]

The following is the response with a diff algo with N=8.

[chart: zcash_vs_8]

zawy12 commented 7 years ago

The closest I can simulate Zcash's algo is an N=63 averaging window (I minimized the sum of the least squares of the difference between mine and theirs). The 16% rise and 32% fall limits do not seem to ever be reached. If they were, there would be smooth lines during periods of sudden rise and fall in hashrate.

[chart: zcash_63]

But it is possible to get a lot closer to theirs with N=17 if I severely restrict how fast D can change to 1.009 and 0.991 of the previous D. This would seem to strongly indicate their problem is with the rise and fall limits being 15x and 30x more restrictive than they think. But again, theirs should show smoother lines if that were the case. The lack of symmetry they have in the limits should cause other problems if the limits were being reached. If (rise limit) < 1 / (fall limit) like theirs then avg difficulty should be too low, but it is not. That indicates the limits are NOT being reached.

[chart: zcash_rise_fall]

Here's an N=17 averaging window again on this scale. The two big peaks > 2*Diff are hash attacks (N + Stdev*SQRT(N) = 2*N => Stdev > 4 was reached, out of only 1200/17 = 70 samples). Similarly, my math indicates about 75% of the smaller peaks may also have been attacks (10% were > 1.5*Diff when 2.5% were expected, because N + 1.96*SQRT(N) = 1.5*N).

So the N=17 looks ugly with wide variation, but it's mostly responding to actual hashrate changes.

[chart: zcash_17]
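As a sanity check on those thresholds (just my arithmetic re-done in Python, treating the block count in a window as Poisson with mean N and standard deviation SQRT(N)):

```python
from math import sqrt

N = 17
# A peak of 2*Diff corresponds to ~2*N events in the window: N + Stdev*sqrt(N) = 2*N
print((2 * N - N) / sqrt(N))      # ~4.1 sigma: essentially impossible by chance in 70 samples
# The 1.96-sigma (97.5%) level sits at about 1.5x the expected difficulty:
print((N + 1.96 * sqrt(N)) / N)   # ~1.48, i.e. roughly 1.5*Diff
```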

I am mentioning Zcash only because I started looking at it this morning, since you showed me they should be the same algorithm. I'm showing all the above with HUSH in mind, since it should behave the same way.

QuinnMallorry commented 7 years ago

@zawy12 How are these charts being generated? They're great, and I'd like to use them to test different N values with a project. Could you shed any light on how you're generating them?

zawy12 commented 7 years ago

Hush is showing more rhythmic peaks than Zcash. This indicates some miner is out there waiting for the high difficulty of his last attack to wear off, then beginning again immediately. During his off time he's probably switching over to Zcash or another small coin that uses Zcash's POW and difficulty algo. This alone is a reason not to use Zcash's difficulty algo: it just makes it easy for attackers if everyone has the same diff algo.

This chart was done the same way as the last Zcash one above. The 9 spikes above diff = 300M are at the same confidence interval as the 3 spikes above 8M on Zcash.

[chart: hush_vs_17]

Assuming 10 attacks a day getting 20 blocks each at 12.5 coins per block, with attackers 50% likely to sell them soon, that could be a selling pressure of 1250 coins per day.

zawy12 commented 7 years ago

@JamesKegel I just use a LibreCalc spreadsheet. I obtained the data while running Zcash and HUSH as nodes and using a perl script that outputs the solvetimes and difficulty to text files. The pseudocode for the one that looks most like Zcash (which no one should ever use because it invites attacks, very possibly currently driving the HUSH price down) is:

# Approximated Zcash (and HUSH) difficulty algorithm
N=17;
rise_limit=1.009; # never do this, needs fixing
fall_limit=0.991;  # never do this, needs fixing
T = 2.5*60;
next_D = avg(last N Ds) * T / avg(last N solvetimes);
next_D = rise_limit*previous_D  if next_D/previous_D > rise_limit;
next_D = fall_limit*previous_D  if next_D/previous_D < fall_limit;

I use an Excel spreadsheet for testing difficulty algorithms and for generating simulated solvetimes under different hash attack scenarios. If you tell me exactly what you're doing I might have a spreadsheet for it. I use Excel most of the time because LibreCalc crashes too much and is slow under heavy number crunching.
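If a spreadsheet isn't your thing, the same kind of chart data can be generated with a few lines of Python. This is only a rough sketch under my usual assumptions (exponential solvetimes, difficulty proportional to hashrate at equilibrium); the function and variable names are mine:

```python
import random

T = 150   # target solvetime (seconds)
N = 17    # averaging window

def zcash_like_next_D(Ds, STs, rise_limit=1.009, fall_limit=0.991):
    # The approximated Zcash/HUSH algorithm from the pseudocode above.
    nD = (sum(Ds[-N:]) / N) * T / (sum(STs[-N:]) / N)
    nD = min(nD, rise_limit * Ds[-1])
    nD = max(nD, fall_limit * Ds[-1])
    return nD

# Units chosen so that difficulty == hashrate * T at equilibrium,
# i.e. the expected solvetime at difficulty D and hashrate HR is D / HR.
base_HR = 1000.0
D0 = base_HR * T
Ds, STs = [D0] * N, [float(T)] * N
for _ in range(2000):
    # Crude on/off "hash attack": 2x extra hashrate whenever D dips below D0.
    HR = base_HR * (3.0 if Ds[-1] < D0 else 1.0)
    STs.append(random.expovariate(HR / Ds[-1]))   # exponential solvetime
    Ds.append(zcash_like_next_D(Ds, STs))
# Dump Ds and STs to a text file and chart them in any spreadsheet.
```

Swap any of the other algorithms in this thread in for zcash_like_next_D to compare responses for different N.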

lludlow commented 7 years ago

@zawy12 I really think the overall effect is related to overall hashrate / difficulty. High hashrate = higher diff = less susceptible (well less noticeable)

I have seen some conversation lately where solo miners are moving from coin to coin as difficulty drops, with massive hashing power. Sadly, I have even heard of a fellow pool owner doing the same thing using NiceHash.

I do agree we should come up with a better way of handling diff adjustments, but I feel we need to address spikes in hashrate differently from steady growth.

zawy12 commented 7 years ago

@madbuda What overall effect?

zawy12 commented 7 years ago

I found the error in the Zcash terminology. The N=17 "averaging window" is given only 25% weight. The other 75% is the target timespan of the averaging window (which is a constant). This is why it has an effective averaging window that is almost 4x longer than N=17.
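A quick way to see the roughly 4x dilution (my arithmetic, using the tempered formula quoted below):

```python
T, N = 150, 17

def tempered_next_D(avg_D, avg_ST):
    # next_D = AVG(D) * T / ( 0.75*T + 0.25*AVG(ST) )
    return avg_D * T / (0.75 * T + 0.25 * avg_ST)

# If hashrate suddenly doubles, the measured avg solvetime falls toward T/2,
# but the tempering only lets the measurement move ~1/4 of the way:
print(tempered_next_D(1.0, T / 2))   # ~1.14 instead of the full 2.0
# so the response behaves like a window of roughly N / 0.25 blocks:
print(N / 0.25)                      # 68, close to the fitted "N=63"
```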

Another problem is that the code says that in order to prevent "timewarp" attacks, it uses the median timestamp of the previous 11 blocks (which is the 6th block in the past if the timestamps are sequential). They do not use the timestamps in the N=17 window, but shift them 6 blocks behind the difficulties. In other words they are using:

# Zcash and Hush difficulty algorithm (exact when timestamps are sequential)
next_D = AVG(D) * 150 / ( 0.75*150 + 0.25*AVG(ST) )
D = previous 1 to 17 difficulties 
ST = previous 6th to 22nd solvetimes

The 16% upper and 32% lower limits are applied to the denominator and are never reached due to the 75% dilution of the "N=17" measurement.

The delay caused by seeking a median is not needed. The comments say it is needed to prevent timewarp attacks. The classic Bitcoin timewarp attack is not possible when using rolling averages. Timestamps can be manipulated to lower difficulty, but unless the manipulator has > 50% and is manipulating most timestamps, just using the correct timestamp instead of looking 6 blocks in the past will cause only 1 block to have the wrong difficulty, and it is immediately corrected as soon as the next correct timestamp is given. The size of the error caused by the wrong timestamp on that 1 block is limited by Gandalf's limits.

The median does not prevent the ~50% problem, because after 11 blocks (or fewer if he has > 50%) the attacker should "own" the median timestamp, with the same steady-state result as when the "median delay" is not used. With medians, it takes longer to see the bad actor, but it also takes longer for the effect to go away. After N blocks, the bad timestamps pass out of the window, which has the reverse effect (if he is at roughly 50% and not too much above it). In other words, after N blocks of 50% large forward-stamped times (that drive difficulty down) have come in the front of the window, it would start seeing the 50% correct times (with large negative solvetimes) pass out the rear of the window, counteracting any new forward-stamped times coming in the front.

The "N=63" and 6-blocks-too-slow errors reduce profits for dedicated miners and probably cause extra sell pressure. If all the "ill-gotten" gains (which are 10% of daily volume) are selling pressure, and if 1/2 the Cryptopia volume is from back-and-forth day or week traders, and if "permanent sell pressure" is 1/2 of the permanent buy/sell activity, then 40% of the "permanent sell pressure" is from the slow difficulty. Another effect is that if a better difficulty algorithm raises prices, it is a positive feedback that encourages more permanent mining, making a non-perfect difficulty algorithm a little tougher to cheat with on-off mining.

Here are links to the code: Constants / Core calculation / Median / Limits on D change. I hope it can be reduced to a one-line for loop.

The 1st chart below shows the simplified equation above for the bird's nest is correct. The black line hidden under the orange line is the actual HUSH difficulties. The second chart shows how the median is delaying the difficulty response by 6 blocks. Actually it looks like 5 behind instead of 6 for some reason.

[charts: zcash, zcash_error]

The above are the reasons I would like to see my Gandalf difficulty algorithm implemented. Less aggressive versions of this have been used in Sumokoin and Karbowanek and maybe 1 or 2 others.

Do not be tempted to use the simplification below that others have used. It is used in the Zcash algorithm which may have prevented people from discovering the median is not needed. Do not throw out or prevent negative solvetimes. They are the corrections to bad timestamps.

next_D = sum(D's) * T / [ max(timestamp) - min(timestamp) ]   # don't do this!

Superficially it looks like what I want because the N's seem like they can cancel: next_D = avg(D) * T / avg(solvetimes) = [ sum(D)/N ] * T / [ sum(solvetimes)/N ]. But max(timestamp) - min(timestamp) != sum(solvetimes) when timestamps are out of order. The first form does not allow negative solvetimes to cancel forward-stamped times. Example: 5 blocks all solved in time = 1, with 1 bad timestamp "10" trying to lower difficulty. Timestamps: 0, 1, 2, 3, 10, 5, giving apparent solvetimes 1, 1, 1, 7, -5. The first denominator gives the wrong average solvetime, (10-0)/5 = 2, and the second gives the correct (1+1+1+7-5)/5 = 1.
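A tiny Python check of that example (same numbers as above):

```python
timestamps = [0, 1, 2, 3, 10, 5]        # one forward-stamped block ("10")
solvetimes = [b - a for a, b in zip(timestamps, timestamps[1:])]
print(solvetimes)                        # [1, 1, 1, 7, -5]  -- keep the negative!

# max-min denominator: the one bad timestamp poisons the whole window
print((max(timestamps) - min(timestamps)) / len(solvetimes))   # 2.0  (wrong)
# signed sum of solvetimes: the next honest timestamp cancels the bad one
print(sum(solvetimes) / len(solvetimes))                        # 1.0  (correct)
```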

zawy12 commented 7 years ago

This is background information for any future HIP that may seek to change the difficulty algorithm.

I'm having trouble proving N=8 is harder to hack than Zcash's N=17 (aka "N=63"). I can simulate attacks against Zcash's algo and get 20 blocks per attack, 10 times per day. This appears to be what they are doing. I was not able to do better. It can throw the difficulty into an oscillation that allows attacks more often with a lower starting difficulty, so longer attacks are not as good, despite the difficulty not rising much even after 30 blocks.

Similarly with N=8, I can attack for 7 blocks when D goes low, again throwing it into an oscillation, getting blocks at low D and leaving others a higher difficulty. The situation is not necessarily better if they do it 25 times a day instead of 10.

So I can get 20x10 = 200 blocks at low D per day with "N=63" and 7x25 = 175 with "N=8". With N=8, the D starting point can be lower, so the net profit attainable is the same. Notice in a chart below that N=8 leaves constant miners with higher difficulty than the "N=63" attacks do. Those 3x difficulties only last a few blocks. In both cases about 33% of the day's blocks are obtained at 1/2 the correct difficulty. This means 66% of the blocks are at a difficulty that averages 15% above the correct difficulty. Average solvetimes in both cases remain accurate.

In these examples, the attacker has only 1/2 the hashrate, so he gets only 100 of the 200 blocks mentioned. Larger attackers do not change any of this except they get close to the full 200 blocks. For N=8 and 2x attacks (1x network hashrate) they get only 3.5 blocks per attack.

Concluding that N=8 is not better assumes they know how to do this and that there is no lost time in switching, so N=8 could be a lot better if they lose 2 blocks in switching their 2.5 MH/s (Hush attackers) to and from other coins. Another potential benefit of N=8 is that they need to wait for a low period to begin the attack, but with "N=63" they can begin almost any time, so it's easier to switch around coins. They need to leave a coin at the same time it is easier on another coin, so there could be an added cost in waiting.

I had been saying the attackers were "3x" the base hashrate (I meant 2x above base). But the charts did not look exactly the same. So I made a 2x attack (1x over base) trigger on different values of D and the statistics and chart looked more correct. That's the last chart below.

Actually advertising the hack invites big miners to compete against each other, cancelling all of their profit and all of the harm if about 5 are constantly trying to do it. A similar thing can be said for "N=63" so this point is moot in terms of trying to decide which is better.

In the charts below, every grey area that is not under the blue is when the difficulty is set too low. The taller the exposed grey area, the more the D is too low. The exposed grey areas above 1 are attacker profit.

Switching from Zcash to N=8 (N=4 is possible) hinges on their losses in switching to and from the coin.

There are no tricks with the algorithms you can do to "fix" this. Straight averages are the best.

The Zcash algorithm is incredibly slow:

[chart: h6]

Zawy v1b N=8 (Gandalf) seems awesomely fast and great. Notice that most of the grey area (an attacker's profit) is covered up pretty well.

[chart: zawy_n 8]

....and the Zcash/HUSH algo doesn't even seem to know the attacker exists in the next chart. This is a "dumb" attack that does not trigger on low D, and it ends up oscillating to its own disadvantage. Notice it starts hashing on peaks instead of valleys. See the last chart for a smart attack that begins as the difficulty is dropping.

[chart: zcash]

The 5x swings in the chart below for N=8 (green) occur 2 or 3 times per day and look worse than the Zcash algorithm (blue), but they are not really a bad thing. These wide swings are unavoidable if a fast response is desired. The following chart is for a constant hashrate, so the 5x swings mean it will take an average of 5x the desired solvetime for 2 to 4 blocks in those peaks. 34% of those blocks will take > 10x.

[chart: zcash_8]

It is still possible to hack zawy v1b N=8 if you can switch on and off without losing more than 1 block per switch. All that grey area is not good.

[chart: zawy_hack]

But they are currently doing the same thing on Hush. The following is a simulation that gives the same measures as HUSH's actual data.

[chart: zcash_hack]

zawy12 commented 7 years ago

Looking at the statistics again, I can only prove 3.5 attacks per day in the last 6 days at 2x to 3x hash power above the base level, not 9 or 10 at 1x above the base level. They're probably getting about 60 blocks per day, still at my old estimate of 10%. Since I am able to hack my own difficulty algorithm so that it can be cheated as much as longer averaging periods, I'm not going to push a change to N=8 unless it can be shown attackers lose 2 or more blocks in switching. It's possible to go to a 6-block averaging window, but 2 blocks per hour would reach 1/2 the correct difficulty by accident, giving them plenty of choice times to begin an attack and get the first block "free".

As I've shown above, it's hard to prove any type of averaging is better than even this terribly slow "N=63". But it is so hard to believe that such an unresponsive algorithm is a good thing that I still recommend a change. I've just realized we can just change constants without rewriting code. Karbowanek coin has been happy with a real N=17 in Zawy v1 (simple average) since their hard fork last November (it's a CryptoNote coin). Somewhere on bitcointalk is a chart of the difficulty before and after Zawy v1 with N=17 was implemented. What I'm doing below is basically Zawy v1b with N=12. It could be done with N=4 to N=50.

Simulation again shows 200 blocks per day (~1/3) can be obtained with a smart trigger as above when using N=12, just as with N=8 and "N=63". Since attackers are not maximizing what's possible, they may abandon what they're doing completely with a smaller window.

All my comments are optional reading. To summarize, by changing these constants I am changing the algorithm:

# Hush/ Zcash
# ST = solvetime
next_D = AVG(past 17 D's) * 150 / ( 0.75*150 + 0.25*AVG(past 17 ST's delayed 5 blocks) )

# Zawy recommended change by changing Zcash constants
# 1.05 keeps average solvetime better
# change the "0" and "1.05" if you want a slower response like Zcash.
next_D = AVG(past 12 D's) * 150 / ( 0*0.75*150 + 1.05*AVG(past 12 ST's) )
# approx +/- 25% limit per block rise and fall.
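For clarity, here is the same recommended change written out as a small Python function (my own rendering, not the actual chainparams patch; the limits are applied to the measured timespan the way Zcash applies them):

```python
def recommended_next_D(Ds, STs, N=12, T=150, adjust=1.05,
                       max_adjust_up=23.7, max_adjust_down=31.0):
    """Plain N=12 window, no 75% tempering, no median delay, 1.05 factor on
    the measured timespan, symmetrical ~23.7%/31% per-block limits."""
    window_timespan = N * T
    actual_timespan = adjust * sum(STs[-N:])      # keep negative solvetimes
    # Clamp the timespan: a shorter timespan raises difficulty (up limit).
    actual_timespan = max(actual_timespan, window_timespan * (100 - max_adjust_up) / 100)
    actual_timespan = min(actual_timespan, window_timespan * (100 + max_adjust_down) / 100)
    avg_D = sum(Ds[-N:]) / N
    return avg_D * window_timespan / actual_timespan
```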

Summary of the 3 places to change constants, without all my explanatory text below:

1) Eliminate Median Time Past. Use enum { nMedianTimeSpan=1 };
instead of enum { nMedianTimeSpan=11 };

2) Shorten window size, use symmetrical allowed rise and fall

consensus.nPowAveragingWindow = 12; 
consensus.nPowMaxAdjustDown = 31; 
consensus.nPowMaxAdjustUp = 23.7;

3) Remove the extreme 75% "tempering" or dilution of the average. Use: nActualTimespan = 1.05*nActualTimespan; instead of nActualTimespan = params.AveragingWindowTimespan() + (nActualTimespan - params.AveragingWindowTimespan())/4;

Links to code again: Constants / Core calculation / Median Past Timestamp (MPT) / Limits on D change

# The following change does away with delaying the timestamp averaging. 
# Timestamp error protection is provided by the rise and fall limits.  
# The 11 comes from the BTC timewarp problem
# that is not applicable to rolling averages. 
# If a bad timestamp comes in, a good timestamp in the next block erases the negative effect.
# If X bad timestamps come in a row, then X good ones erase their effect.

# old:
 enum { nMedianTimeSpan=11 }; 
# new:
enum { nMedianTimeSpan=1 };  # I assume nothing in the code will make this fail

[ edit: the above change should probably not be made unless another change is made to the code. See next comment ]

# The Zcash limits are never activated, and if they were it could potentially cause an 
# asymmetry problem.  These new limits will be important due to other changes.
# They will provide protection against 4 negative timestamps in a row, which even in 
# Zcash's crazy first days I do not believe occurred.  Even 10 negative timestamps in a row 
# will cause a problem for only a few blocks. 

# old limits
# min allowed diff (aka POW) decrease per block = 1/(1+0.32) 
# max allowed diff (aka POW) increase per block =1/(1-0.16)
# consensus.nPowAveragingWindow = 17;
# consensus.nPowMaxAdjustDown = 32; // 32% adjustment down
# consensus.nPowMaxAdjustUp = 16; // 16% adjustment up

# new limits
# If my numbers are not used, this still needs to be made symmetrical  
# up = 1- 1/(1+down)   
# The previous asymmetry goes back to Digishield who never realized 
# their oscillations were caused by the asymmetry.
# They made it asymmetrical because they wanted a faster return to 
# normal hashrate after attacks.
# Digishield v3 and/or Zcash "fixed" the oscillations by diluting the N=17
# so much that the limits were never reached. 
# The source of my limits:
# down = (5x attack max typical)^(-2/N) = 0.765 via Zawy v1b
# but here "down" = (1/0.765 - 1) x 100 = 31
# Zawy v1b up = 1/0.765 = 1.31, but here: up = (1 - 1/(1+0.31)) x 100 = 23.7

consensus.nPowAveragingWindow = 12; 
consensus.nPowMaxAdjustDown = 31; 
consensus.nPowMaxAdjustUp = 23.7;
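Re-doing the arithmetic behind those constants in Python (nothing new, just checking the numbers above):

```python
N = 12
down_multiplier = 5 ** (-2 / N)            # (5x attack max typical)^(-2/N)
print(down_multiplier)                      # ~0.765
print((1 / down_multiplier - 1) * 100)      # ~31   -> nPowMaxAdjustDown
print((1 - 1 / (1 + 0.31)) * 100)           # ~23.7 -> nPowMaxAdjustUp
print(1 / down_multiplier)                  # ~1.31 max per-block rise in difficulty
```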

# old
# this is the line that dilutes the Zcash N=17 to an "N=63"
nActualTimespan = params.AveragingWindowTimespan() + (nActualTimespan - params.AveragingWindowTimespan())/4;

# new
# this is the big change that will make it more responsive and unstable
nActualTimespan = 1.05*nActualTimespan; 

# Zcash has a timestamp limit somewhere of something like +/- 3600 from the previous timestamp.
# +/- 1200 would be more reasonable, but the POW limits above prevent the timestamp limit from
# ever becoming a factor.

zawy12 commented 7 years ago

The Zcash (and Digishield) code uses int64_t nActualTimespan = nLastBlockTime - nFirstBlockTime; which is used to get an average solvetime (after effectively dividing by N). However, when there are negative solvetimes due to some timestamps being before previous timestamps, the negative solvetimes are not accounted for by this method. The average comes out incorrect. My changes above will be more accurate if the average is calculated correctly because the first correct timestamp erases the effect of previous bad timestamps. I've posted examples of the error in other threads. The error might be substantial if it's not corrected.

Zcash's "incorrect" average does not cause an incorrect final average in the Zcash code because the MTP code (see note) prevents Zcash's false average from being incorrect: it never sees a negative timestamp. But the use of the MTP code comes at the cost of delaying the average 5 blocks.

So, I can't change the 11 to 1 in the above code for the MTP unless I also fix the average.

note: Median Time Past - see the link to "Median" in the previous comment. This code goes back to Bitcoin trying to stop timewarp attacks, which do not occur with rolling averages. Although it appears to handle negative timestamps well, the correct average is better because it changes only 1 block to the wrong setting, quickly corrects, and thereby does not cause a 5-block delay.

zawy12 commented 7 years ago

Here's an algorithm to directly penalize "hash attackers" and reward constant-on miners. I'm putting it on the Karbowanek coin page because they will probably test it if not implement it. In my testing it did not need any modification (the average solvetime came out correct).

https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a#commitcomment-24037066

radix42 commented 7 years ago

awesome, please report back any results Karbowanek gets!

leto commented 7 years ago

In general, a big :+1: to our network rewarding "stable" miners and making jumping on and off the network less attractive, via however we can do that.

zawy12 commented 6 years ago

Here's the algorithm I'm recommending to everyone.

http://zawy1.blogspot.com/2017/11/best-difficulty-algorithms.html

zawy12 commented 6 years ago

We've developed a new algorithm that's the best of the best and allows integer math on the target.

https://github.com/zawy12/difficulty-algorithms/issues/17

leto commented 5 years ago

@zawy12 can you give guidance on the exact parameters/code to use so we can implement your latest LWMA-3 in Hush?

zawy12 commented 5 years ago

I really don't have a LWMA-2 or 3 "ready to go" for BTC/Zcash clones. There was a Zcash clone working on LWMA-2 but I can't remember which one. Here's the code for LWMA for Zcash/BTC clones:

https://github.com/zawy12/difficulty-algorithms/issues/3#issuecomment-388386175

zawy12 commented 5 years ago

PS, if you're making a POW where difficulty needs to decrease a lot (the target needs to increase a lot), then setting a guess for the target that is in the correct range for the first N blocks is a good idea if getting the chain stuck is a possibility. The guess would be in the DA and of the form:

if (height > fork_height && height < fork_height + N) {
    next_target = target_guess;
}

Also, I think it would be neat to rescale the difficulty so that it is 1/1000 of the hashrate, so that everyone can immediately see the difficulty algorithm's current hashrate estimate. The official hashrate is based on 144 blocks or something like that and is more stable, but it does not change as fast. To make difficulty scale to the hashrate, change the powLimit maximum, where it is defined, to:

powLimit = 2^256 / 1000 / solvetime

In hex this is: 00006FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
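A quick check of that hex value (my arithmetic, with solvetime = 150):

```python
powLimit = 2**256 // 1000 // 150
print(format(powLimit, '064x')[:6])   # '00006' then ~'f': four leading zero nibbles
# The 00006FFF...FFF constant above is this value rounded up to 7*2^236 - 1.
```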

In ZEC the powLimit is:

powLimit = uint256S("0007ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");

Hush looks like the exact same number, but it isn't, because the 3 leading zeros in HUSH's line below were inserted for no apparent reason by some original dev who needs to be flogged. It's 67 hex places instead of the 64 (32 bytes) for a hash. https://github.com/MyHush/hush/blob/343ee57c9021e184a3e1e0f4b478a4f898ce835f/src/chainparams.cpp#L47

The above numbers mean ZEC and HUSH difficulties are multiples of the hashrate like this:

D = HR * 150 * powLimit / 2^256 
D = HR * 150 / 2^(leading zero bits)

HUSH has 1 leading zero bit when the 3 excess hex zeros are removed, and ZEC has 13.