zawy12 opened 6 years ago
Thank you, this one is on our radar, as updating the current algo is on our roadmap. I was curious about the Simplest difficulty algorithm too.
There's no problem with the Simple DA. The LWMA is only marginally better. It's a little slow if T=600, but should work well with your T=120. Just now I lowered N from 35 to 30.
You should remove the timestamp limits if you set
CRYPTONOTE_BLOCK_FUTURE_TIME_LIMIT = 7*T
We have T = 240. I will try the Simple DA again; so far it kept lowering difficulty to zero. There must be some error in my implementation, though I was trying the Digishield version.
I often make mistakes in converting my spreadsheet algorithms to the pseudocode, but I can't see an error in this case.
No, as I said, I suspect I made a mistake :)
With T=240, use N=7 and adjust=0.983 for that one. This will make it a little faster. In terms of an SMA, this is like N=50 but better. LWMA is only a little bit better.
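For reference, here is a hedged sketch of what the EMA ("Simple DA") in difficulty form could look like with these parameters, using the divisor k = N - 1 + ST/(T*adjust) and next_D = prev_D * N / k that comes up later in this thread. The function name, clamping bounds, and guards are my assumptions, not tested code from the repo:

```cpp
#include <cstdint>

// Hedged sketch of the EMA ("Simple DA") in difficulty form, using
// k = N - 1 + ST/(T*adjust) and next_D = prev_D * N / k.
// Function name and guards are assumptions, not code from the thread.
uint64_t ema_next_difficulty(uint64_t prev_difficulty, int64_t ST) {
    const double T = 240.0, N = 7.0, adjust = 0.983;  // per the suggestion above
    // Symmetrical protection against bad timestamps, as in the LWMA discussion.
    if (ST > 7 * 240) ST = 7 * 240;
    if (ST < -6 * 240) ST = -6 * 240;
    double k = N - 1.0 + (double)ST / T / adjust;  // k must stay floating point
    if (k < 1.0) k = 1.0;  // guard against a tiny or negative divisor
    return (uint64_t)((double)prev_difficulty * N / k);
}
```

With a solvetime near T the difficulty barely moves; a fast solvetime pushes it up, a slow one pulls it down.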
We need to investigate how the network will behave with CRYPTONOTE_BLOCK_FUTURE_TIME_LIMIT = 7*T
on advancing to daylight saving time. It depends mainly on correct time settings of pool servers, so it will probably be OK, because pool servers will most likely have correct clock settings.
I'm having a discussion with Monero on this future timestamp limit change.
I don't think Monero is going to change this; it doesn't affect them. I think I know what mistake I made in the Simple DA - I was using target instead of difficulty 😅
5 Monero clones are having really good success with the new algorithm. Your difficulty will be a lot more stable with fewer delays than the old one. But if you use EMA instead of LWMA, it will be nearly as good (I'll probably be the only one who notices a difference) and it will be good to have data for EMA. EMA is more mathematically pure.
In the meanwhile I made implementations of LWMA and EMA (Simple DA).
It looks like neither can work from scratch (for a new coin), i.e. the chain needs to be kickstarted with the default cryptonote algo and then switched over. If difficulty somehow drops to 1, it cannot climb back up and will fall into a loop of difficulty 1. 😕
Thanks for pointing this out. I'll make a note in the algo. To make the code simple, I would give LWMA an arbitrarily low difficulty constant if the height is less than N. Just "give" the first N blocks away. Similarly for EMA, it just needs an initial value. They correct fast. A lot faster than Digishield, which takes 500 blocks.
I am not sure I made the EMA correctly, because the formula is for target and I converted it to difficulty.
If you invert prev_target and next_target to be difficulties, it should work. But check to make sure there is not a problem when ST <= 0.
It looks correct to me.
Looks like the previous double_t for the k part of the formula is better.
Oh, yeah, k needs to be floating, not integer. There should be a way to make everything integers if you wanted. All my spreadsheet testing is with the difficulty form, so you have what I tested the most.
Looks like double_t k = N + ST / T / adjust - 1;
instead of int64_t k = static_cast<int64_t>(N + ST / T / adjust - 1);
is more precise
See LWMA for a recent change. I've re-written the pseudocode to be easier to understand, and made the difficulty a harmonic mean instead of an arithmetic mean to get a more accurate avg solvetime.
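To illustrate the harmonic-mean point: target is proportional to 1/difficulty, so summing 1/D (a harmonic mean of D) corresponds to averaging targets. A small sketch of my own, not code from the repo:

```cpp
#include <vector>

// Harmonic mean of difficulties: N / sum(1/D[i]).
// Averaging 1/D matches averaging targets, since target ~ 1/difficulty.
double harmonic_mean_D(const std::vector<double>& D) {
    double sum_inverse = 0;
    for (double d : D) sum_inverse += 1.0 / d;
    return (double)D.size() / sum_inverse;
}
```

For example, difficulties 100 and 300 give a harmonic mean of 150, below the arithmetic mean of 200; the harmonic mean weights easy (fast-target) blocks correctly.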
I made a version with your recent changes; still testing it.
Thanks. I'm glad to see someone using my new terminology. I'm going to highlight your code as the "reference implementation".
Your N in k needs to match with the N of your loop. Please add the following as a comment:
T= target_solvetime;
N = int(45*(600/T)^0.3)
Several coins have copied code but not changed their N. One is using N=720 and another is using N=17! That's like N=8 in an SMA. They have very exciting days in difficulty. (Brazukcoin)
Your T=240, so your N needs to be 60 instead of 67.
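For concreteness, here is one version of the window-size rule quoted above as a function. The constant and rounding changed between messages in this thread, so treat the exact form as approximate:

```cpp
#include <cmath>

// Window size as a function of target solvetime T, per the rule
// N = int(45*(600/T)^0.3). The exact rounding varies between versions
// of this formula in the thread.
int lwma_window(double T) {
    return (int)(45.0 * std::pow(600.0 / T, 0.3));
}
```

At T=600 this gives the base window of 45; smaller target solvetimes give larger windows.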
I also made harmonicMeanDiff += diff / (n - 1);
because when we calculate difficulty and solvetime by subtracting the previous element from the current one, N is off by one.
I used your formula to get 66, and to compensate for that off-by-one I increased it to 67, but apparently it was a different formula.
We need to make sure all is implemented correctly
Some coins are also sorting timestamps for an unknown reason.
Hmmm, I need to think about what sorting does.
I think I changed the N formula which is why you think N=66 or 67. N=60 is safer than N=67.
N = int(0.5 + 45*(600/T)^0.3) = 60
Your N in k should be the same as the N in the loop, so k is wrong if the loop runs N-1 times. It is better to change your
for (int64_t i = 1; i < n; i++) {
to
for (int64_t i = 1; i <= n; i++) {
And make sure timestamps[n] is the most recently solved block and i=1 is n-1 blocks before it. For example, if N=3, then the oldest solvetime is at i=1, which is N-1 blocks before i=N=3.
Thanks for mentioning some are sorting the timestamps. It's a major security hole. I've warned everyone that I have contact info for. I saw a sort when I was copying and pasting everyone's code, and I thought "ugh" but I was in a hurry and didn't stop to think about it.
I think sorting was just copied from the default cryptonote code into our first (and currently active) implementation of your formula.
If I change to for (int64_t i = 1; i <= N; i++)
the last calculated solvetime and difficulty cause an overflow error, because there's no element beyond n - 1 to subtract from.
You have to expand your timestamp array to N+1 to get that first solvetime.
The correct loop usually is (i = 0; i < 5; ++i)
whereas we have (int64_t i = 1; i < n; i++)
to get the element to subtract from.
Then use int64_t i = 1; i <= n; i++
It won't work this way. If i <= n,
the last elements in the arrays of solvetimes and difficulties will be incorrect (the solvetime will hit the lower limit, and the diff will overflow). We have timestamps and cumulative difficulties, and to get solvetimes and corresponding difficulties we have to subtract the previous element from the current one in the loop, which gives us 59 elements in the solvetime and difficulty arrays instead of 60. To compensate, I changed N in the formulas to n - 1 in the calculations of k and harmonicMeanDiff. And to get the correct 60 elements I need to set N to 61.
I'm saying I would prefer to see the arrays sized to N+1 and then calculate k and run the loop as N instead of N-1.
I made a correction to the divisor k.
OK, I'll modify your code to publish it as a "reference implementation"
I am not sure about this change and how to explain it 😕 😃 Maybe you have already taken this into account and my attempt to correct the formula is not needed. Let me try to explain. Because of this subtraction of the previous timestamp/cumulativeDifficulty from the current one, I decided that N in the corresponding calculations in the loop (the divisor N*(N+1)/2
and avgTarget += 1/difficulty[i]/N
) should be corrected to match an array smaller by this one missing element, i.e. N reduced by one. We will always have N-1 solvetimes and difficulties. And I decided not to use N+1 elements in the arrays of timestamps and cumulative difficulties, though it's possible; it is less complicated to just change N to N-1 in one place in the difficulty calculation loop than in multiple files, because const DIFFICULTY_WINDOW (i.e. N) is used in multiple places. By the way, why is it 1/difficulty[i]/N in your formula? I have just difficulty[i]/N.
I changed DIFFICULTY_WINDOW_V3 (i.e. N) to 61 instead of 60 to get 60 elements after the subtraction in the loop.
I was editing this morning and made a mistake. I just now corrected it. Do you or karbowanec have discord for us to converse? Or send me an email at zawy@yahoo.com since I lost yours.
I would like N to be the same as the pseudocode so that the code can look like the pseudocode as much as possible. I would do this:
if (timestamps.size() > N) {
timestamps.resize(N+1);
cumulativeDifficulties.resize(N+1);
}
Then just use N as in the pseudocode. But I think you understand this is what I was saying. But I don't understand how this is more complicated.
I understand, but I can't do that because the timestamps and cumulative difficulties are fed into the difficulty calculation as arrays of N. As I said, it is simpler to change it in the loop than in different files.
If the pseudocode looked like
// height-1 = most recently solved block
for ( i = height-N; i < height; i++) {
solvetime[i];
if (solvetime[i] > 7*T ) { solvetime[i] = 7*T; }
if (solvetime[i] < -7*T) { solvetime[i] = -7*T; }
j++
// The divisor N*(N+1)/2 = sum(j=1 to N) normalizes it to an LWMA average.
LWMA += solvetime * j / (N*(N+1)/2);
avgTarget += target[i]/N;
# if using difficulty instead of target:
# avgTarget += 1/difficulty[i]/N;
}
no changes in the divisor N*(N+1)/2
and avgTarget += 1/difficulty[i]/N
would be needed. But because of this solvetime = timestamp[i] - timestamp[i-1]
and diff = cumulativeDifficulties[i] - cumulativeDifficulties[i - 1]
I think N should be adjusted to N-1 in these two places in the loop, because we have N - 1 solvetimes and difficulties in the loop. Am I wrong?
That pseudocode is wrong. Here's current pseudo code
// Loop over N most recently solved blocks
// height = most recently solved block number
for ( i = height-N+1; i <= height; i++) {
// Solvetime must be a signed integer!
solvetime = timestamp[i] - timestamp[i-1];
// The following is symmetrical protection against bad timestamps.
if (solvetime > 7*T ) { solvetime = 7*T; }
if (solvetime < -6*T) { solvetime = -6*T; }
j = j +1;
// The term (N*(N+1)/2) = sum(j=1 to N) which normalizes it to an LWMA.
LWMA += solvetime * j / (N*(N+1)/2);
sum_inverse_D += 1/difficulty[i];
}
// Keep LWMA sane in case something unforeseen occurs.
if ( LWMA < T/10 ) { LWMA = T/10; }
harmonic_mean_D = N / sum_inverse_D;
next_difficulty = harmonic_mean_D * T/LWMA * 0.998
To answer your question: yes, you seem to be wrong. for ( i = height-N; i < height; i++) {
has N steps in the loop. And so does the new version for ( i = height-N+1; i <= height; i++) {
It's not about the steps; it's that the number of solvetime = timestamp[i] - timestamp[i-1];
and difficulty = cumulativeDifficulties[i] - cumulativeDifficulties[i - 1];
values is less than N. The last version, where we have N+1 timestamps because of for ( i = height-N+1; i <= height; i++) {
should now be correct, as we got the one missing element. I am not good at math, so take my remarks with a pinch of salt. 😄
We have discussed so much stuff that was a lot more complicated than this without nearly as much confusion. When i=1, timestamp[i-1]
and cumulativeDifficulties[i - 1]
should both be in the array. We loop over N, but we get data from N+1 blocks because we reference two blocks in each loop. We loop over 1 to N but get data from 0 to N.
Yes, I am rewriting it. N + 1 in the loop does the job. It means that those who do not compensate are not getting the correct difficulty.
lol. This is crazy. I can't figure out what problem you're seeing with N+1. What's wrong with this?
timestamp[]
and cumulativeDifficulties[]
are arrays with elements 0 to N.
solvetime[1] = timestamp[1] - timestamp[0]
and
difficulty[1] = cumulativeDifficulties[1] - cumulativeDifficulties[0]
What? sum_inverse_D is needed to get harmonic_mean_D. This is right:
sum_inverse_D += 1/difficulty[i];
...
harmonic_mean_D = N / sum_inverse_D;
It's all about poor implementations of your pseudocode. Including mine. I am working to get it right. 😄
So we have a live example of abuse of the old algo, and are going to update ASAP.
What live example?
Here's my new difficulty algorithm that 6 Monero clones are using. It is much better than karbowanec's (which is what I recommended back in November 2016). Masari has put in a pull request to Monero for it which they are considering.
The following is about how to handle bad timestamps that miners can use to lower the difficulty. Because of the following, method 3 in the algorithm is the best way to handle bad timestamps.
I've spent a lot of time finding the best way to handle bad timestamps. The best method I've found so far is to allow negative solvetimes (out of sequence timestamps) and to make
CRYPTONOTE_BLOCK_FUTURE_TIME_LIMIT = 7*T
where T = target solvetime. The default for this variable is 7200, which is bitcoin's default and is way too large when T=120 instead of T=600. Such a large future limit allows an exploit in all difficulty algorithms: a large (>50%) miner can lower difficulty to (N-7200/T)/N of the initial difficulty. For N<61, he can drive difficulty to zero. He simply assigns a large forward timestamp. But the number of blocks he can get at low difficulty is limited by how low he drives it. To maximize the number of blocks he can get without difficulty rising, he can come in and assign timestamp = (HR+P)/P*T + previous_timestamp
where P is his hashrate multiple of the network's hashrate HR before he came online. For example, if he has 2x network hashrate, he can assign 1.333xT plus previous timestamp for all his timestamps. This prevents difficulty from rising, keeping it the same value, maximizing the number of blocks he can get. With CRYPTONOTE_BLOCK_FUTURE_TIME_LIMIT=7200 and T=120 seconds, this would allow him to get 7200/120 = 60 blocks without the difficulty rising. He can't continue because he will have reached the future time limit that the nodes enforce. He then leaves and difficulty starts rising if negative solvetimes are allowed in the difficulty calculation. If negative solvetimes are not allowed (method 1 timestamp handling in my LWMA), he gets 60 blocks all the same over the course of 90 blocks that he and the rest of the network obtained. The average real solvetime for the 90 would be 1/3 of T (if he has 2x the network hashrate) without the difficulty rising. And when he leaves, difficulty will stay the same. So the algorithm will have issued blocks too quickly without penalizing other miners (except for coin inflation which also penalizes hodlers).
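The arithmetic in this exploit description can be sketched directly. The helper names are mine; this only restates the two figures above, the FTL/T block count and the (N - FTL/T)/N lowered-difficulty fraction:

```cpp
#include <cstdint>

// How many blocks a timestamp attacker can obtain before hitting the
// node-enforced future time limit FTL, with target solvetime T.
int64_t max_blocks_before_limit(int64_t FTL, int64_t T) {
    return FTL / T;
}

// Fraction of the initial difficulty the attacker can drive the window
// down to: (N - FTL/T)/N, floored at zero since for N <= FTL/T the
// difficulty can be driven all the way to zero.
double lowered_difficulty_fraction(double N, double FTL, double T) {
    double f = (N - FTL / T) / N;
    return f > 0 ? f : 0;
}
```

With the defaults in the text (FTL=7200, T=120) the attacker gets 7200/120 = 60 blocks, and with N=60 the difficulty can reach zero; lowering FTL to 7*T shrinks the window to 7 blocks.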