zawy12 / difficulty-algorithms

See the Issues for difficulty algorithms
MIT License

TSA RTT: Change Difficulty During the Block #36

Open zawy12 opened 5 years ago

zawy12 commented 5 years ago

See also Tom Harding's RTT paper.

This is a special difficulty algorithm that "tightens the Poisson" of solvetimes, keeping them closer to the target. It continually lowers the difficulty during hashing based on the timestamp the miner assigns while he's changing the nonce.

Update #5 Here's a post I made about changing reward instead of difficulty.

This TSA type of RTT ("a fast DA inside of a slow DA") may be the only type that is safe for the following reason:

On most DAs (not LWMA, for some reason) there is a >51% selfish mining attack where the attacker sets a very far-forwarded timestamp, such as 2.7x the averaging window (mean lifetime) of ASERT, and then sets each solvetime to zero for about 2.7*mean_lifetime*1.38 blocks. If he has 101% of the network hashrate when he begins (as opposed to being a miner before the attack with 50.5% of the hashrate), then he can submit his blocks in 2.7x the normal amount of time, getting 38% more blocks. BTW, in the 50.5% attack described above in parentheses, the attacker may still get 38% more blocks than he expected, but by leaving the public chain he lets the public difficulty drop, which will invite more hashrate from other sources, so he will not win the chain-work race. Getting back to my point: if ASERT is made an RTT (using the current timestamp instead of the previous timestamp), this attack can be made much worse, because the attacker's 1st block is now very easy instead of as hard as the main chain, since he sets the timestamp that dictates his first difficulty.
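For reference, a minimal sketch of the per-block exponential (EMA/ASERT-style) update this argument assumes, with mean lifetime $\tau$ (my notation, not from any specific implementation):

$$D_{i+1} = D_i \, e^{(T - st_i)/\tau}$$

In an RTT variant, $st_i$ is computed from the miner's own current timestamp, so forwarding the timestamp by $\Delta$ immediately scales the difficulty he must solve by $e^{-\Delta/\tau}$; with $\Delta = 2.7\tau$ his first block is roughly $e^{2.7} \approx 15$x easier than the main chain's.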

Update #4 (problem with RTTs like the TSA)

This will go into detail about the primary problem with RTTs. With an RTT, there is a risk of orphans from cheaters setting the time exactly on top of the FTL, causing some nodes to reject while others accept. I thought about making the FTL shorter than the max benefit of the RTT (have an RTT that stops decreasing before it reaches the FTL), but it would quickly have the same problem to almost the same degree if the cheaters are > 50% (imagine what happens if there are 3 cheaters in a row). I've argued the solution is that cheaters will avoid placing the timestamp "on top of" the FTL for fear of being orphaned. But if > 50% are not dedicated and they see a cheating block past the FTL, they will still mine on top of it, because they are likely to not solve their own block before the FTL passes, and it will help them win a 2-block race if an honest miner does not mine on top of it. This appears exactly like Emin et al's <50% selfish mining attack where there's impromptu 50% collusion from honest miners merely seeking profit. The best partial solution is to make the RTT not change difficulty very much, so that timestamp cheaters do not have much to gain by risking orphans. But this problem and selfish mining are false problems, because honest miners can agree to copy the attacks ("start their own attack") to make the attack lose. For example, if 33% of hashrate starts an attack, then 33% of the remaining hashrate can start their own attack, leaving 33% to choose randomly. In this way no one loses or gains. CSW of all people called the honest counter-attack "when gamma goes negative", to which Vitalik (of all people) vehemently complained and berated him with a mic at a conference. So CSW was right where Vitalik was wrong. Emin still considers it a real attack as of 2021.

Update #3 (the best DA I can think of)

Here's the algorithm I would have all coins use, assuming it is not shown to cause problems. I have not checked to see what small percent adjustment is needed to keep more accurate time.

previousST = how long it took to solve the previous block. 

currentST = how long it is taking to solve the current block. It is the timestamp in the 
header of this block minus the timestamp of the previous block.

chainTarget = the target that goes into nBits

minersTarget = the target the miner has to solve for this block, based on nBits and the 
timestamp in the header.

N=60
chainTarget = previousTarget * (1 + previousST/T/N - 1/N)
N=11
minersTarget = chainTarget * (1 + currentST/T/N - 1/N)
nextTarget = minersTarget
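For concreteness, here is a minimal C++ sketch of the two-stage update above. It uses the identity target*(1 + st/(T*N) - 1/N) = target*(N*T + st - T)/(N*T) to stay in integer math; the 64-bit targets and function names are assumptions for readability (real code would use a 256-bit target type and guard the multiply against overflow):

#include <cstdint>

// One EMA step: target * (1 + st/(T*N) - 1/N)  ==  target * (N*T + st - T) / (N*T)
uint64_t emaTarget(uint64_t target, int64_t st, int64_t T, int64_t N) {
    // Multiply before dividing to keep precision. N*T + st - T > 0 for st >= 1.
    return target * static_cast<uint64_t>(N*T + st - T) / static_cast<uint64_t>(N*T);
}

uint64_t minersTargetFor(uint64_t previousTarget, int64_t previousST,
                         int64_t currentST, int64_t T) {
    uint64_t chainTarget  = emaTarget(previousTarget, previousST, T, 60); // goes in nBits
    uint64_t minersTarget = emaTarget(chainTarget,    currentST,  T, 11); // what the miner solves
    return minersTarget;  // nextTarget = minersTarget, per the scheme above
}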

Solves taking > 3x600 s would be about 5x more rare. There would be fewer orphans from a flatter distribution of solvetimes. The only possible problem I can see is if miners try to forward their timestamps to get lower difficulty. That's limited by node validation's FTL requirement, so if everyone competes to do that, fairness is retained; the problem then shifts to the consequences of having a lot of blocks falling on top of the FTL limit, potentially frequently getting temporarily rejected by, say, half the nodes. The miner risks a higher rejection rate for a lower difficulty. This is alleviated by my settings: N=11 allows only a 0.15% drop in difficulty per 10 seconds (with T=600), and I believe you can make the FTL 10 seconds.

At the start of the period, difficulty is 6% higher than baseline. This gives a strong incentive for the 6x HR increases we see to not come online to get the 10% lower-than-avg difficulty the current SMA allows, because with the above they will be paying 7.5% above avg difficulty for those 1-minute solves.

This has a slight incentive problem: it gives a > 50% miner more incentive to selfish mine. Instead of paying that extra few percent for fast blocks, he just sets all his timestamps to T in a selfish mine, then finishes his last few timestamps fast to make a little more chain work than he expects the public to get. He gets his blocks faster than the public chain, so he has to wait, and can go back to BTC mining in the meantime. When the public chain's time gets close enough that his last timestamp is not past the FTL, he submits his chain. But he has to be > 50%. The 3% lower difficulty incentive (for a 60% HR miner) is not a lot when he has already decided not to selfish mine, just to get 100% of the blocks instead of settling for 60%.

Here are some modelling results for TSA (a slow normal EMA N=80 with a fast EMA N=11 RTT riding on top). The negative "stolen" metric indicates it's costly to try to pick a low difficulty to begin mining with a high hashrate. The blue + symbols are the target the miner actually had to solve. The green line is the difficulty that goes on the chain.
Target ST/avgST= 595/599.93 N= 80 and M=11 attack_size: 600 start/start: 130/135 StdDev STs: 1.13 delays: 2.13% stolen: -1.23% image

The following shows how the EMA N=11 changes the difficulty depending on the solvetimes. image

Update #2

This article got out of hand in length and complexity, so I'll summarize what a lot of coins could do to get an enormous benefit. Force timestamps to be at least 1 s after the previous timestamp, make FTL = 10 seconds instead of 2 hours, and revert to node local time at 5 seconds instead of 70 minutes. Then apply the following to an existing difficulty algorithm:

T = target solvetime
adjustedT = 0.9*T //  this might need to be tweaked for accurate avg T
st = solvetime of current block 
st = timestamp in header  minus previous timestamp 
1/M = fraction you want difficulty harder (target lower) at st = 1 second.
1/10 = 10% harder difficulty at st = 1 s and 10% easier at st = 2*T
target *= (1 + st/adjustedT/M - 1/M)
For simplicity at a small cost in accuracy to chain work, the target BEFORE this adjustment is the target that goes on the chain.
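As a quick arithmetic check of the 1/M wording (using the unadjusted T; the 0.9 factor shifts these numbers slightly):

$$\text{target factor} = 1 + \frac{st}{T\,M} - \frac{1}{M},\qquad M = 10:\ \ st = 1\,\text{s} \Rightarrow \approx 0.90\ (10\%\ \text{harder}),\qquad st = 2T \Rightarrow 1.10\ (10\%\ \text{easier}).$$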

If preventing chains from getting stuck is the only problem (no coin using LWMA has been stuck), then dropping difficulty exponentially after the solve is delayed will work, rather than needing to read this long and complicated article. The algorithm below decreases difficulty by 10% for every T = target_solvetime of delay. MTP = median of time past = 11 in BTC & ZEC; it needs to be changed to 1 to force a tight timestamp range (MTP=1 means there will be no out-of-sequence timestamps). The FTL = future time limit is 2 hours in BTC & ZEC and must be reduced to something reasonable like 3xT, as in all coins that use my difficulty algorithms. This requires changing the "revert to node local time instead of median of peer time" rule from 70 minutes to FTL/2, or your difficulty can be attacked by a 33% Sybil attack on node time.

//  This is code to only stop long delays. It is not a symmetrical adjustment like 
// the full TSA that has other benefits.

// Deal with out-of sequence timestamps.
// Starting MTP+1 = 12 blocks in past, find the newest timestamp
maxTimestamp = timestamps[-MTP]; 
for ( i = -MTP+1; i <= 0 ; i++) { 
    maxTimestamp = max(timestamps[i], maxTimestamp); 
}

double solvetime = max(0, minersTimestamp - maxTimestamp);
double T = target_solvetime;
if (solvetime > 6*T+FTL) { 
    // If you don't like my floating point solution then use e^x = 1+x+x^2/2 & log rules.
    double adjust = Diff*pow(0.90, (solvetime-6*T-FTL)/T);

    // Protect against the unlikely event of floating point differences between nodes
    if ( ceil(adjust + 0.01) > ceil(adjust) ) { adjust = ceil(adjust + 0.03); }
    else { adjust = ceil(adjust); }

    Diff = static_cast<int64_t>(adjust); 
}

Summary The previous section was only about lowering difficulty if there are long delays. Most of the rest of this article is about making the difficulty adjustment symmetrical.

Benefits:

Drawbacks:

Short summary: Difficulty is calculated by normal algorithm. It is adjusted based on the timestamp the miner reports. The miner does not know his difficulty until he has selected a timestamp. The difficulty drops approximately linearly as the timestamp gets further from the previous block's timestamp (see chart below). The equation below symmetrically and properly "tightens the Poisson". This gives the correct avg solvetime & prevents excess blocks from being awarded in a selfish mining attack that could otherwise use timestamp manipulation to get more blocks.

TSA equation: 
Dout = Din*ST/(T+(ST-T)*e^(ST/T/M) )
    where
Din = difficulty from normal algorithm
Dout = difficulty TSA uses
ST = apparent solvetime (this timestamp minus previous timestamp)
T = target solvetime
M > 1 makes the adjustment more aggressive  
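In LaTeX form, with a sanity check on the limits:

$$D_{out} = D_{in}\,\frac{ST}{T + (ST - T)\,e^{\,ST/(T M)}}$$

At $ST = T$ the exponential term is multiplied by zero and $D_{out} = D_{in}$; as $ST \to 0$ the ratio approaches $\frac{M}{M-1}$ (harder), and for long solves it falls smoothly below $D_{in}$ (easier).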

image

Simple Method & Changing Reward Instead of Difficulty

Before getting to all the complicated details and code for doing this perfectly, I want to point out that the above image shows it can be approximated simply with a linear equation. An arbitrarily-selected linear function may have undesired effects, especially if adjusting reward instead of difficulty, so I am only suggesting linear functions that simulate the e^x. By using the e^x = 1+x approximation, which is darn close for small x, the above difficulty adjustment can be:

M = 5
adjustedT = T*0.893
Dout = Din /(1 + ST/adjustedT/M - 1/M) 

Where the 0.893 was determined by modelling constant hashrate to return the desired solvetime T.
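To see where the linear form comes from, substitute $e^x \approx 1 + x$ into the exact TSA denominator:

$$T + (ST - T)\Bigl(1 + \frac{ST}{T M}\Bigr) = ST\Bigl(1 + \frac{ST}{T M} - \frac{1}{M}\Bigr),$$

so

$$D_{out} \approx \frac{D_{in}}{1 + ST/(T M) - 1/M},$$

which is the equation above with $T$ replaced by the empirically tuned adjustedT.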

To see how this is used more fully:

M = 5;
MTP = 11; // Bitcoin's median time past setting

// Prevent negative solvetimes in a safe way.
// This does not limit timestamps as reported on chain which
// is done by normal future time limit and MTP past limit.

maxTimestamp = timestamps[-MTP]; 
for ( i = -MTP+1; i <= 0 ; i++) { 
   maxTimestamp = max(timestamps[i], maxTimestamp); 
}

k = 100*T*M; // factor to prevent integer division problem

// The following is an adjustment due to using
// an e^x = 1+x approximation in the adjustment

adjustedT = targetSolvetime*0.893;

// Adjust the difficulty the miner has to solve based on the
// timestamp in his block header. Expect miners to push the timestamp
// towards the FTL, to the point of risking their block being
// rejected. Blocks with earlier timestamps give the blocks after
// them more room to assign a later timestamp and thereby more
// easily pull ahead in chain work.

Dout = (D*k)/(k + k*(minerTimestamp - maxTimestamp)/adjustedT/M - k/M);

There is theoretically no difference between increasing the reward and lowering the difficulty.

// Simple TSA method to simulate above image, linearly increasing 
// reward (instead of lowering difficulty) from 1/1.3 = 0.77 for 
// solvetime = 1 second to 2x for solvetimes > 4x target

SolvetimeRatio = (this_timestamp - previous_timestamp) / TargetSolvetime;
reward = BaseReward / max(0.5, 1.3 - 0.2*SolvetimeRatio);

But make sure out of sequence timestamps are handled correctly as shown above.

Digishield in this context compared to LWMA or SMA Zcash's Digishield is a good algorithm for KMD, which can have > 50,000x hashrate attacks and needs to respond quickly, provided MTP is changed from 11 to 1 and the adjusted difficulty is the one that gets stored as the difficulty for that block on the chain. This is because it only uses the past 17 blocks in averaging whereas LWMA uses 60. Digishield ignores 75% of the effect of the 17 solvetimes to get the same level of stability in normal circumstances, but since the difficulties are being adjusted a lot, Digishield will respond ~3x faster. LWMA will be more stable, so it depends on your needs. This is the only context where Digishield is faster.

How do we handle chain work? In contradiction to the above section on Digishield, most of the work I've done kept the baseline difficulty on the chain unchanged. In other words, the adjusted difficulty is not what's usually stored on the chain or subsequently used in future baseline D calculations. But what should the cumulative difficulty be if I do that? I usually assumed it would be the adjusted D, the D the miner solved. But this is not the average D that the miners as a group faced during that block. My future work in this area will make the adjusted difficulty the one used in the baseline difficulty calculation and the chain work. The wide changes in D will be offset by the higher stability in solvetimes. Another option I had not appreciated in the past: if we change reward instead of difficulty to encourage faster solvetimes (and less reward in the future), the D for future calculations and chain work is not affected. But a lot of times the amount of mining is not strongly correlated with reward.

Live Coin Example

This is a day of TSA difficulty on a live coin, Xchange. The purple is difficulty, and the lines below it are the average of 11 solvetimes and an estimate of hashrate changes based on those times. No other algorithm will have such a wildly varying difficulty, and the stability of the solvetimes is similarly unique. image For comparison, here is a typical day of difficulty for Zcash, which has a typically-good difficulty algorithm (Digishield). The red lines are high-probability instances of sudden increases in hashrate. image Here is a comparison of their averages of 11 solvetimes. image image

[Xchange code is here.](https://github.com/Xcgtech/Wallet/blob/abf1b93486e30a4c4877400f2af574f85005a6f8/src/pow.cpp#L47) See comment below for a copy of it with most comments removed.

The following chart shows the benefits compared to current methods. "NH" refers to "NiceHash-like" on-off mining that has 2x a coin's baseline hashrate (HR). The curve with a peak assumes NH-like miners will learn how this algorithm works and take advantage of it, making the solvetimes more accurate. The green line is more realistic for most coins because it assumes constant HR. Delays > 4x the target time are equally rare under either assumption, 1 per 43 days instead of 14 times a day for T=120 coins.

image

See the math section below for how the blue and green plots were generated. Histograms are generated by the derivative of the CDF. For example, the Poisson CDF = 1-e^(-t/T). Derivative = 1/T*e^(-t/T). The red plot uses T=1. (BTW, for N samples and bins = n, multiply the derivative by N*n.) A very aggressive setting of M=1, shown below instead of M=4 (as above), enforces precise solvetimes even with just constant hashrate, but the difficulty starts out 20x higher and ends up 10x lower than baseline by 2xT.

image

Simulation confirmed the math (this is a less-aggressive linear motivation instead of s-curve). Target solvetime on the x-axis was 100 seconds. (5 million blocks) image

Results The following charts compare TSA (with LWMA-1 as its "baseline" algorithm) and LWMA-1 alone. LWMA-1 is the best algorithm I had before TSA for responding quickly to on-off mining. This simulation models the worst attacks I see. They come from coins using CN8, which attracts NH miners. Notice that TSA does not drop low enough to trigger an "attack" of extra hashrate (3x baseline) as often. The on-off in this model is triggered by the baseline LWMA-1, which is smoother than normal because of TSA, so there are fewer attacks. The solid green bars are the normalized network hashrate. The blue "+" is the actual difficulty the miners have to solve in TSA, based on their timestamp. They can't forward the timestamp because the future time limit is made much closer to real time. If they delay the timestamp, D is harder to solve. Notice that in the oval regions there are not any "+" below average D during the attacks. The attackers are having to pay above-average difficulty, maybe 2x higher than the dedicated miners.

image

NiceHash miners are not going to pay 2x the average difficulty, so the above simulation is not realistic. It was based on them only looking at the baseline D (the LWMA1 input to TSA). The following is if they start mining only when the actual difficulty they face (TSA) drops below the average of the past N D's. Notice the blue line at bottom (avg of 11 solvetimes) is more stable.

image

The following is LWMA1 with and without TSA with constant hashrate.

image

With this model, only 16 blocks out of 1 million were > 4x target solvetime, compared to a normal, stable, constant-HR coin like BTC, which would have 18,000. The histogram looks like TSA M=2 with constant motivation.

Using TSA with other DAs LWMA-1 is my preferred "baseline" DA for use with TSA, but any other algorithm can be used.

The On/Off Mining Problem Part of the reason for trying to get consistent solvetimes by adjusting difficulty (D) is to reduce on-off mining that causes subsequent long delays and losses from a higher D incurred by dedicated miners. This is the only reason for spending a lot of time trying to find a fast but smooth difficulty algorithm (DA). On-off mining is only a problem in an environment of >51% "miners" (like NiceHash, ASICs, or Multipools) jumping around to different coins. Otherwise a simple moving average (SMA) DA with N=100 to 200 would be sufficient to track price and longer-term mining trends.

Non-Difficulty Attempts to Prevent On-Off Mining Coins are trying unique POW and merge mining, which potentially reduce >50% attacks and would make on-off mining protection unnecessary. Unique POW seems like an anti-Satoshi idea (almost actively seeking lower hashrate, which arguably results in less security against 51% attacks) and will probably fail more and more often in its goals. Coins usually regret merge mining for complex reasons which may be solvable, or maybe they should not need solving. In either case, I still see a strong need to deal with existing on-off mining, and I expect the need to increase.

Can we change reward in addition to difficulty? The idea comes up from time to time of changing reward (R) instead of, or in addition to, difficulty (D) in order to motivate miners towards more accurate solvetimes. Miners are only interested in the R/D ratio, the higher the better, not R or D alone. In trying to motivate them to start or stop hashing a coin, you can change either by the same percentage and get the same effect. Over the long term, a constant or decreasing R per block that allows for an increasing D is a lot different from trying to keep a constant D (i.e. hashrate) by varying R, but on a daily basis I can't see that it makes a difference.

Summary of TSA This is a super fast (current block) difficulty algorithm (DA) to be used in conjunction with existing DAs. The existing DA adjusts for longer-term trends while the fast DA adjusts for the current miner. It uses only the CURRENT block's solvetime instead of the past solvetimes used in normal DAs. The miner updates his timestamp to the current time as he hashes, and this applies a large change to a baseline D. He has to solve a higher D if he solves fast, and a lower D if the solve is late in being found. This opens up the potential for timestamp manipulation, which is discussed later. In a normal DA, we estimate the future hashrate (HR) of miners based on PAST hashrates by avg(D)*T/avg(solvetime). In this method, by looking at the CURRENT timestamp and adjusting the current D based on it, we are estimating current HR and penalizing that miner, and only that miner, for high hashrate, not future miners. It makes all solves closer to T, which I call "tightening the Poisson". It eliminates long delays in solves.

The equation to set the fast difficulty (Dout) is based on the standard slow-changing difficulty (Din) and the timestamp the miner sets. T = target solvetime. M = 3 to 10ish, the slowness. Dout = Din*t/(T+(t-T)*e^(t/T/M) ) Here's the plot: image

The following is from spreadsheet testing with constant hashrate. The theory is solid: I did not have to make any changes to get nearly perfect average solvetime. The dots are Dout, the fast part of the DA, which is the actual D that miners have to solve. It's based on an EMA N=5. The line is Din, the slow baseline D, from an N=28 EMA.

image

Code Changes Needed Outside the TSA

  1. Miners / pools need to change the timestamp in the template while changing the nonce, or pay a higher D than those using current timestamps. Updating the timestamp at least once per 5 seconds is probably enough. CN coins do not update timestamps; this must change for TSA to be beneficial.

  2. The validator must use the block's timestamps to calculate the D target that the hash must be below.

  3. If there's an API or template request to get current D, the daemon must calculate it based on its peer (or system) time.

  4. Reduce Future Time Limit The FTL (the max time ahead of node time a timestamp can be) needs to be reduced to maybe 1/4 of the target solvetime or less (30 seconds for T=120) to reduce instances of timestamp manipulators getting lower D at the expense of others. This limits the max timestamp manipulation of D to 5% less for M=5 and 10% less for M=3. But even if the FTL is set too high (like FTL = target), everyone could compete to forward timestamps as much as possible (stopping before some nodes start rejecting their blocks, which decreases the chance their chain is built upon), which will restore fairness and effectively prevent the problem. A large miner can only get 1 block "in a row" at low D if he manipulates time better than others.

  5. Reduce allowable peer time difference from system time A node looks at the median of peer time and adjusts its time to equal the median if it is within a limit; otherwise it uses its own time and might throw a warning. That limit might need to be 1/5 of the FTL to reduce chain splits (6 seconds for FTL = 1/4 of T if T=120). I think nodes and miners should set system time to NTP time (e.g. pool.ntp.org) and constantly throw a warning if the median of peer time is not within 5 seconds. NTP methods were supported by Theymos and Mike Hearn. NTP and GPS time could be hijacked in extreme scenarios, but I am not saying nodes have to use NTP time. Time is a knowable fact by all nodes independently, without us needing to specify an oracle (NTP, UTC, GPS, Russia's GPS system, a cesium clock, or the stars if your node has the optics for it), so a consensus determination is not needed. Byzantine-tolerant systems such as BTC require clock synchronization. POW and the blockchain do not define time; nodes do.

The TSA algorithm: A 1-Block DA Jacob Eliosoff's EMA gives perfect average solvetimes when using only the previous block's difficulty and timestamp to adjust. It has a perfect symmetry that is based on the Poisson distribution. This makes it ideal for adjusting the D of the current block. It is also a great choice for the baseline DA, where you can use the same code with M=30 to be like LWMA N=60. The Din below comes from a normal slow DA (the baseline D). Dout is the output of this fast EMA, the adjustment to Din. Dout = Din*t/(T+(t-T)*e^(t/T/M) ) I found M=5 to be a good, somewhat conservative choice, causing Dout to range from 60% to 135% of Din. M=3 would be pretty aggressive. M=10 might have minor benefit. Larger M is a smaller, slower adjustment per block.
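A minimal floating-point reference for this equation may help when checking values (not consensus code, which must use the integer math below; the names are mine):

#include <cmath>

// Reference TSA adjustment: Dout = Din*t/(T + (t-T)*e^(t/(T*M))), valid for t > 0.
double tsaAdjust(double Din, double t, double T, double M) {
    return Din * t / (T + (t - T) * std::exp(t / (T * M)));
}
// tsaAdjust(D, T, T, 5) returns exactly D; smaller t costs more, larger t costs less,
// roughly the 60% to 135% range quoted above for M=5 over typical solvetimes.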

See bottom of this page for target code for BTC/Zcash clones

// TSA, Time Stamp Adjustment to Difficulty
// Copyright (c) 2018 Zawy, MIT License.
// See https://github.com/zawy12/difficulty-algorithms/issues/36
// Changes difficulty in current block based on template's timestamp.
// FTL must be lowered to 30 seconds. CN coins / pools must fix their universal timestamp problem.
// CN coins must get pool software jobs & nodes issuing templates to update template timestamp
// to current network time at least once every 10 seconds, not merely setting it to arrival time 
// of previous block. Miners outside of pools need to update it without asking for new template.

// Do not change anything below this line. Make the calling code work with this
// so that you do not have errors.

difficulty_type TSA(std::vector<uint64_t> timestamps, 
      std::vector<uint64_t> cumulative_difficulties, uint64_t T, uint64_t N, uint64_t height,  
            uint64_t FORK_HEIGHT, uint64_t  difficulty_guess, int64_t template_timestamp, int64_t M ) {

   uint64_t  L(0), next_D, i, this_timestamp(0), previous_timestamp(0), avg_D;

   assert(timestamps.size() == cumulative_difficulties.size() && timestamps.size() <= N+1 );
   // Hard code D if there are not at least N+1 BLOCKS after fork or genesis
   if (height >= FORK_HEIGHT && height <= FORK_HEIGHT + N+1) { return difficulty_guess; }
   assert(timestamps.size() == N+1); 
   previous_timestamp = timestamps[0]-T;
   for ( i = 1; i <= N; i++) {        
      // Safely handle out-of-sequence timestamps
      if ( timestamps[i]  >= previous_timestamp ) {   this_timestamp = timestamps[i];  } 
      else {  this_timestamp = previous_timestamp+1;   }
      L +=  i*std::min(6*T ,this_timestamp - previous_timestamp);
      previous_timestamp = this_timestamp; 
   }
   avg_D = ( cumulative_difficulties[N] - cumulative_difficulties[0] )/ N;

   // Prevent round off error for small D and overflow for large D.
   if (avg_D > 2000000*N*N*T) { next_D = (avg_D/(200*L))*(N*(N+1)*T*99);  }   
   else {    next_D = (avg_D*N*(N+1)*T*99)/(200*L);    }    

// LWMA is finished. Now use its next_D and previous_timestamp
// to get TSA's next_D. I had to shift from unsigned to signed integers.
// assert( M > static_cast<int64_t>(1) );

   int64_t ST, j, f, TSA_D = next_D, Ts = T, k = 1E3, TM = Ts*M, exk = k;
   if (template_timestamp <= static_cast<int64_t>(previous_timestamp) ) {
      template_timestamp = previous_timestamp+1;
   }
   ST = std::min(template_timestamp - static_cast<int64_t>(previous_timestamp), 6*Ts);
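   // The next three lines compute exk =~ k*e^(ST/(T*M)) in integer math:
   // multiply by e once per whole multiple of TM in ST, then a 3rd-order
   // Taylor series handles the fractional remainder f.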
   for (i = 1; i <= ST/TM ; i++ ) { exk = (exk*static_cast<int64_t>(2.718*k))/k; } 
   f = ST % TM;    
   exk = (exk*(k+(f*(k+(f*(k+(f*k)/(3*TM)))/(2*TM)))/(TM)))/k;
   TSA_D = std::max(static_cast<int64_t>(10),(TSA_D*((1000*(k*ST))/(k*Ts+(ST-Ts)*exk)))/1000);
   // Make all insignificant digits zero for easy reading.
   j = 1000000000;
   while (j > 1) { 
      if ( TSA_D > j*100 ) { TSA_D = ((TSA_D+j/2)/j)*j; break; }
      else { j /= 10; }
   }
   if (     M == 1) { TSA_D = (TSA_D*85)/100; }
   else if (M == 2) { TSA_D = (TSA_D*95)/100; }
   else if (M == 3) { TSA_D = (TSA_D*99)/100; }
   return static_cast<uint64_t>(TSA_D);     
}
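For orientation, a hypothetical caller sketch; the two getter names are illustrative, not from any actual codebase:

// Gather the last N+1 timestamps and cumulative difficulties (oldest first)
// plus the current template timestamp, then ask TSA for the target difficulty.
std::vector<uint64_t> ts = GetLastTimestamps(N + 1);             // hypothetical helper
std::vector<uint64_t> cd = GetLastCumulativeDifficulties(N + 1); // hypothetical helper
difficulty_type D = TSA(ts, cd, T, N, height, FORK_HEIGHT,
                        difficulty_guess, template_timestamp, M);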

Choosing a baseline DA to get Din Just use any decent algorithm that's relatively slow, such as the following 100-block SMA. But TSA needs out-of-sequence timestamps to be handled as in the code above when it is determining the time since the previous block.

// CD = cumulative difficulty, TS = timestamps, H=height
Din = ( CD[H] - CD[H-100] ) * T /  ( TS[H]-TS[H-100] );

The denominator should have a divide-by-zero protection like std::max( 1+T/50, TS[H]-TS[H-100] ); to prevent a 100-block selfish mine with all the same timestamps causing a divide-by-zero error on the block after he submits his chain.
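Putting that guard into the line above (same names as before):

// CD = cumulative difficulty, TS = timestamps, H = height
Din = ( CD[H] - CD[H-100] ) * T / std::max( 1 + T/50, TS[H] - TS[H-100] );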

Getting the individual solvetimes Use the method above for safely handling out-of-sequence timestamps. Do not do the following, because a reverse timestamp can be sent and truncated, and then the next timestamp will have a large value from the subtraction, which will lower difficulty: if ( t < 0) { t=0; } // DO NOT DO THIS

CN pools I am told CN coin pool software does not allow miners to use accurate timestamps. Miners must use the previous block's arrival time, which they are assigned in the template by the pool. The easiest solution is to modify the pool software to keep the correct timestamp and have miners make a new request for the template every so often, but the request makes miners lose some time. So a balance needs to be struck between this wasted time and not getting the lowest D possible. For T=120 with M=2 and M=4, D drops about 5.5% and 2.4% every 10 seconds. Turning this into an equation, my estimate for how often to update timestamps is update_timestamp_seconds = SQRT( template_delay * T * M / 1.8 ). For example, if it takes 0.2 seconds to get a template and restart hashing, and T=120 and M=5, then it's once every 8 seconds. This is trying to make the % decrease in D while waiting on an update equal the % hash time lost to getting the template. My estimate of the % D drop in 8 seconds from this is 0.2/8 = 2.5%.
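As a sketch of that estimate (the function name is mine):

#include <cmath>

// update_timestamp_seconds = sqrt(template_delay * T * M / 1.8)
// template_delay = seconds lost fetching a template and restarting hashing.
double updateTimestampSeconds(double template_delay, double T, double M) {
    return std::sqrt(template_delay * T * M / 1.8);
}
// updateTimestampSeconds(0.2, 120, 5) =~ 8.2 s, matching the worked example above.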

Possible Risks Tightening the Poisson increases the risk of blocks being found at the same time, which will increase chain splits. Miners can set an old timestamp after a block has been found to get a higher difficulty and thereby create an alternate chain with more work, but their profit and risks seem no different than block-withholding profit and risks (if < 50% hashrate). They can do the opposite and assign a forward stamp to get lower difficulty, but they risk too many nodes not accepting it if they get too close to the FTL. Another block will need to be added before a node looks at it again and thereby lets it pass the FTL test, so it's not definitely excluded forever. But such a cheat risks other blocks with higher D being found. I believe setting an FTL as small as possible (expecting maybe 10% of nodes to reject blocks with good timestamps) is the best course, because cheaters will try to forward the time close to the FTL to get lower D, which will make the next block have higher D. Miners can then fine-tune their clocks forward or backward for maximum profit at the expense of others not fine-tuning. This is a small penalty compared to letting a huge amount of hashrate jump on and get several blocks in a row. They can't get more than 1 block at a cheaper difficulty.

Timestamp manipulating and more alt chains should be expected, but the existing chain work methods seem like they can easily provide sufficient protection.

Background on Node Peer Time I have not been able to find an argument against letting the nodes be more restrictive in the FTL they allow miners. Pre-PBFT Byzantine-tolerant systems required a synchronized clock, and PBFT can be asynchronous, so it seems theoretically acceptable that nodes can ultimately define a tight max forward time (and not just to prevent miners colluding to lower difficulty by accelerating time). In other words, the "timestamp server" function of miners is to make blocks sequential; miners are not necessarily or even feasibly responsible for respecting real time, which apparently can be left up to the nodes. Node peer time seems to be a feeble consensus mechanism, but it has worked despite what a Sybil attack could do. Nodes' time could be backed by NTP or not, or use only an oracle like NTP, or just let every node independently determine time (my and kjj's preference). See this old interesting discussion between kjj, theymos, and Mike Hearn.

Mathematics of the Histograms

The peak in the first plot above needs some explaining. I could have chosen other reasonable functions, such as constant hashrate h=1, which is the green-line plot. A simpler linear function gave a very similar curve to this more realistic S-curve (it models more accurately the on-off mining that occurs in response to changing values of D). Units clarification: H and h in the following are hashes per second. D is hashes per block and T is seconds per block. For BTC clones, difficulty = D * 2^n where n depends on the "maxTarget" or "powLimit" of the coin (n is the number of leading zeros in the max target). D = (2^256-1)/(nBits as a target).

Update: In the following I state H = hashes/second, but H = hashes. The math below uses sleight of hand to cover up that error.

image The green-line plot made the simpler assumption that hashrate will be constant during the block, i.e. h=1.

Historical development

The following was the previous leading text of this article, before I realized the perfect version of Andrew's idea is just another difficulty algorithm being applied to a single block.

Andrew's Sexy Long Tail (Idea) The purpose of this issue is to explore an idea by Andrew Stone. See also here. He understands there is a large amount of futility in trying to make a DA that is any better than an SMA. So he came up with this sort of bold idea that goes outside the box. In order to prevent long solvetimes, he suggests decreasing difficulty DURING the block, based on when the block is finally solved. That is, based on the timestamp selected by the miner's pre-hash template-creating software. So if there's a delay in a block solve, the D gets easier, so that miners have more motivation to start mining. Alternatively, reward could be increased, but that is no different from a miner's point of view, as explained above. Either method requires some consideration in the DA in order for the coin emission rate to stay on track.

Difficulty for the block is determined by applying an adjustment to the "initial" D of the block, given by the standard DA, if the timestamp is past some limit that is longer than we think is desirable, possibly without trying to modify the Poisson distribution, such as only making a change if the solve is taking > 5x the target solvetime T (> 5xT occurs in 4% of blocks if hashrate is constant).

Consequences & Considerations Miners can choose their timestamp up to the future time limit (FTL) and back to the median of the past 11 timestamps (MTP). To get lower difficulty in this scheme, they will try to select a time as close to the FTL as possible while trying not to have too many nodes reject their block. If they send blocks to a node with an FTL further in the future than other nodes, their solves will be sort of hidden (rejected) by the rest of the chain until that time passes. Nodes might follow a rule that if a node is sending blocks with > 2*FTL stamps, it gets banned. Since all miners might try to forward stamps, the FTL probably needs to be pretty "tight". It seems like FTL = T would work. Since miners with custom software will be able to forward timestamps, all miners should be provided with software that allows them to select timestamps, or a good default selection should be used. But this latter option is no different than lowering the FTL. Since a smart miner who knows the trade-offs better than others will be able to select the optimal timestamps to his advantage, the incentives provided by this should be conservative. For example, since > 6xT solves are rare, if we have FTL = T, then maybe the rule should not decrease D until after 7xT, at which point we are going to let those smart miners get their gain, and other miners will not miss out much. The incentive would not be extreme because NiceHash could send D higher with a sequence of fast solves (and thereby cause subsequent long solves) while it goes to mine other coins, then come back at the right moment to get low D. This brings up the question of whether the short solves should cost more D, which I'll call symmetrical incentive and address further below.

If miners lower their D by a forwarded timestamp, they risk losing the largest-chain-work battle to an honest timestamp. This may or may not offset some of the desire to forward timestamps.

Specific Long Tail Example Usually a 33% decrease in D is enough to bring in maybe 3x more hashrate (HR). Solvetimes more than 5x the target solvetime (T) occur only 4% of the time. So if I were to implement Andrew's idea, it would be to start lowering D linearly from D to maybe 0.66*D (not lower) from 5xT to 8xT. If it takes more than a 33% drop, there are bigger issues that this should not try to address any further. This is a decrease of 11% per T from 5xT to 8xT. So the difficulty algorithm would begin with a normally-calculated D received from a standard difficulty algorithm such as a simple moving average, and it would be adjusted by the template-generating routine to

if ( t > 5*T) { 
    Dout = Din*(1 - 0.11*(t-5*T)/T); 
    Dout = std::max(0.66*Din, Dout); 
} else { Dout = Din; }

where t is the solvetime of the block. See the section near the bottom for how to carefully get this value.

What D should be used for the adjusted-D blocks in the DA's averaging? Any estimate will probably work out OK unless there is a serious problem with a lot of delays and/or the estimate is bad. The initial D is too high and the final D is too low. A simple average of the two is wrong, but it may be close enough. Treating them as two different blocks and pretending the high D was solved in the 5*T cut-off time is too high because it was not solved, but that should be OK. Maybe the high D and the 5xT it took to "not find it" can be removed from the averaging, and we use just the average of the Dout, which would be (Din+Dout)/2, with the solvetime t-5*T. But a max 30% difference in the D in a 60-block window is not going to make much difference, however it's done, as long as there are not a lot of delays. The estimate should not overshoot the next D calculation, or there could be a feedback that causes more delays ... that cause more delays. Dout can be obtained from the cumulative D, and Din can be obtained by back-calculating it from the solvetime.

Making it Precise & Symmetrical I mentioned to Andrew in the BCH mail list the possibility that making the motivation "symmetrical" was better, and he was open to it. By this I mean higher D for faster solvetimes in addition to the lower D for long solvetimes (STs). To do this correctly, it seems the change in D should be proportional to the normal probability of finding a block. By this I mean we probably do not want to change the general shape of the Poisson, except to make it "tighter", i.e. more narrow in width near the target solvetime (T). We want a motivation that "follows the Poisson". To do this we have to use Jacob Eliosoff's EMA as a basis for the adjustment.

TSA preliminary code for BTC/Zcash clones

I have not fully tested this like the CN version above, but Kalkan of Xchange (XCG) did and found that the solvetime avg was low. This may have to do with averaging targets instead of difficulties in the pre-TSA (LWMA) part. Instead of LWMA, a simple moving average difficulty algorithm will probably correct it. I may or may not work on fixing and testing it later. The 2nd-to-last line, by Kalkan, is a patch that should reduce the error in whatever pre-TSA difficulty algorithm is used.

// Start with existing difficulty algorithm, then do TSA:

// LWMA difficulty algorithm
// LWMA Copyright (c) 2017-2018 The Bitcoin Gold developers (h4x4rotab)
// LWMA idea by Zawy, a modification to Tom Harding's wt-144
// Specific LWMA updated by iamstenman (MicroBitcoin)
// Recommend N=120.  Requires Future Time Limit to be set to 30 instead of 7200
// See FTL instructions here: 
// https://github.com/zawy12/difficulty-algorithms/issues/3#issuecomment-442129791

unsigned int LWMA_TSA(const CBlockIndex* pindexLast, const Consensus::Params& params,
      int64_t templateTimestamp)  {
// Begin LWMA
    const int64_t T = params.nPowTargetSpacing;
    const int64_t N = params.lwmaAveragingWindow;
    const int64_t k = N * (N + 1) * T / 2;
    const int64_t height = pindexLast->nHeight;
    const arith_uint256 powLimit = UintToArith256(params.powLimit);

   // For startup: 
   int64_t hashrateGuess = 1000; // hashes per second expected at startup
   arith_uint256 targetGuess = 115E75/(hashrateGuess*T); // 115E75 =~ pow(2,256)
   if ( targetGuess > powLimit ) { targetGuess=powLimit; } 
   if (height <= N+1) { return targetGuess.GetCompact(); } 

    arith_uint256 sumTarget, nextTarget;
    int64_t thisTimestamp, previousTimestamp;
    int64_t t = 0, j = 0;

    const CBlockIndex* blockPreviousTimestamp = pindexLast->GetAncestor(height - N);
    previousTimestamp = blockPreviousTimestamp->GetBlockTime();

    // Loop through N most recent blocks. 
    for (int64_t i = height - N + 1; i <= height; i++) {
        const CBlockIndex* block = pindexLast->GetAncestor(i);
        // Out-of-sequence timestamp protection idea inherited from kyuupichan
        thisTimestamp = (block->GetBlockTime() > previousTimestamp) ? block->GetBlockTime() : previousTimestamp + 1;

        int64_t solvetime = std::min(6 * T, thisTimestamp - previousTimestamp);
        previousTimestamp = thisTimestamp;

        j++;
        t += solvetime * j; // Weighted solvetime sum.
        arith_uint256 target;
        target.SetCompact(block->nBits);
        sumTarget += target / (k * N);
   }
    nextTarget = t * sumTarget;

// The above completes the BTG/MicroBitcoin/Zawy code except last 3 lines were removed.
// Now begin TSA.

// TSA Copyright (c) 2018 Zawy, MIT License 
// First implemented w/ modifications on live coin by Kalkan of Xchange

   // R is the "softness" of the per-block TSA adjustment to the DA. R<6 is aggressive.
   int64_t R = 4, m = 1E6, ST, i, f, exm = m;  

   if (templateTimestamp <= previousTimestamp) { templateTimestamp = previousTimestamp+1;}
   ST = std::min(templateTimestamp - previousTimestamp, 6*T);  

   // It might be good to turn this e^x equation into a look-up table;
   for ( i = 1; i <= ST/(T*R) ; i++ ) {   exm = (exm*static_cast<int64_t>(2.718*m))/m;  }
   f = ST % (T*R);
   exm = (exm*(m+(f*(m+(f*(m+(f*(m+(f*m)/(4*T*R)))/(3*T*R)))/(2*T*R)))/(T*R)))/m;

   arith_uint256  TSATarget;
   // 1000 below is to prevent overflow on testnet
   TSATarget = (nextTarget*((1000*(m*T+(ST-T)*exm))/(m*ST)))/1000; 

   // Next line is by kalkan of Xchange to fix low-solvetime problem. May cause overflow on testnet
   // A better pre-TSA algo than LWMA above will prevent the need for this.
   TSATarget *= (T*k)/t; 

   if (TSATarget > powLimit) { TSATarget = powLimit; }
   return TSATarget.GetCompact();
}
miketout commented 5 years ago

Very interesting. I'm going to explore a way to integrate this with Verus Proof of Power. Thanks!

semiformal commented 5 years ago

On the prelim BTC/Zcash code, this line seems off: if ((previousTarget * 67) / 100 > nextTarget) { nextTarget = (previousTarget * 67); } It seems like the goal here is to cap the decrease of nextTarget to 66% of the previousTarget. However, the code is actually setting nextTarget to 67 * previousTarget. Is this really intentional? It seems to cause the target to get stuck at powLimit.

zawy12 commented 5 years ago

Thanks....I don't know how that error got in there. It's missing a divide by 100 at the end and reversing the first part makes it easier to understand. I wonder if a certain coin has this in their code.

if ( nextTarget < (previousTarget * 67) / 100 ) { nextTarget = (previousTarget * 67)/100; }

semiformal commented 5 years ago

Glad to help.

I agree with the change you made around reversing; that's actually what I did first in my own code to fix this, and it makes it much easier to understand. Another option (as long as these operations work for arith_uint256) is to use min/max so that this bug can't happen:

    // Clamp nextTarget between 66% and 150% of previousTarget
    const arith_uint256 prevTarget150pct = (previousTarget * 150) / 100;
    const arith_uint256 prevTarget66pct = (previousTarget * 67) / 100;
    nextTarget = std::max(std::min(nextTarget, prevTarget150pct), prevTarget66pct);
semiformal commented 5 years ago

Oh, yes, also. The coins I checked do have this bug :( I checked a few to see if maybe this was just a bug in the post.

zawy12 commented 5 years ago

Both of those lines were deleted because they caused other problems that they did not cause in LWMA.

zawy12 commented 5 years ago

No limits should be used with TSA. If it varies too much for your tastes, then the thing to do is increase the R value.

semiformal commented 5 years ago

Got it, so those lines should be used in LWMA when not using TSA. However, if you are using TSA, it effectively handles clamping the difficulty adjustment (or at least swinging too far on a difficulty adjustment does not cause as much trouble with TSA).

zawy12 commented 5 years ago

The average solvetime will probably not be correct if you use the clamp. The TSA formula without the clamp is the only one that will keep the correct solvetime when changing difficulty so drastically each block.

zawy12 commented 5 years ago

Those lines are only needed in certain LWMAs, and I'm removing them from the current one. I'm only aware of 1 coin that has this error. If it were ever triggered, the difficulty would probably go very, very low and maybe stay there; since they have not noticed anything, it has never been triggered. So they might be able to fix the affected coin with a hot-fix, since the fix would be backwards compatible.

semiformal commented 5 years ago

Yeah, it seems like it would be very difficult for this condition to occur. I only hit it during development on a brand new chain, and yes, if the condition has never occurred in the past, a hot fix is viable.

mochimodev commented 5 years ago

Monitoring.

b-g-goodell commented 5 years ago

I'm confused @zawy12. If solve times are under a modal distribution, they are LESS like a Poisson process. Significantly less like one.

A goodness-of-fit for your data to show that the result is more like a Poisson process is simple: test whether the expected solvetime equals the variance of the solvetime. If you have an extremely tight, unimodal distribution with mode at 100 seconds (or whatever) your variance will be very small compared to the expectation.

What you have is not a Poisson process. Its variance is too large, and your process is not memoryless. Memorylessness is essential to proof of work, and moreover, there is only one continuous distribution that is memoryless: the exponential distribution. If your difficulty adjustment algorithm is actively shaping the inter-block arrival time/solve time distribution to be anything other than exponential, then you are creating a non-memoryless block arrival time.

This allows miners to abandon a block the moment that it's taking longer to solve than median/mode/expectation, which will drive overall block arrival time down... which will drive your difficulty up... despite that hashrate has not changed.

So, maybe I'm confused, or maybe I need an explanation of the goal of your difficulty adjustment method.

zawy12 commented 5 years ago

Its variance is too large

Maybe you have this misperception because the charts show a wide variance in difficulty, which drives solvetime variance low. I call it "tightening" the Poisson because the solvetime variance is less than the expected value. The solvetime variances for M=2 and M=4 are 0.43 and 0.63 under the "dedicated miner" ("constant HR") assumption, and 0.29 and 0.38 under a simple "max-profit" model where the miner can decide to start or stop mining easily in the current block (and supposedly has other coins to mine during his off time).

This allows miners to abandon a block the moment that it's taking longer to solve

Difficulty gets lower and lower as the solvetime gets longer. They will want to stay.

is not memoryless

I do not know why POW requires memorylessness, but I'll give two observational arguments that it is not a problem, and a 3rd argument that it really is memoryless.

1) The aggregate miner motivation changes all the time in small coins. This changes lambda in the Poisson, sometimes throwing solvetimes off, making them too fast or slow. The DA sees this and eventually corrects. TSA reverses the process. It changes lambda to affect miner motivation.

2) I've done a lot of modelling with various miner motivation assumptions and can accurately predict results of DAs on live coins. The results above should be close to live coin results. I don't know how miners will respond to this nearly as well as other DAs, but it reacts correctly under all the things I've given it, so that there should not be any problem no matter what miners do.

3) Lambda is the only time-dependent variable in the Poisson. TSA converts it to a distinct function of time. In other words, it warps time. Living in the real-world, we think it is not a Poisson. But in the time-warped world it still is. It warps miner-motivation, which warps lambda.

b-g-goodell commented 5 years ago

Why must POW be memoryless? Because it must be progress-free. Why must it be progress-free? Satoshi made this point at least once here but didn't elaborate.

Imagine if proof of work had progress (like a progress bar when you are installing a video game) or more specifically had a memory. If POW is using a progress-based approach, then the fastest machine always gets the next block with very high probability. This changes the threat model from a 50+% attack to whoever has the single fastest machine. Not decentralized.

Inefficient miners can't even probabilistically get lucky. The "more reliable" the memory of a system, the closer you are to a progress bar, and the less that small miners can get probabilistically lucky, i.e. less decentralized. The closer the system is to memoryless, the more likely that a small miner or pool can at least get a few blocks at least occasionally by happenstance.

So the next question is: how do your simulations take all the above into account, or how can they be modified so that they do?

mochimodev commented 5 years ago

The blockchain is a time-invariant system. There is no time from Block A to Block B; there is only the state transition from one block to the next. The implication of "progress" and "memory" is not meaningful in the present context. That a block sequence number is higher, and that the reported solve time is later than the preceding block's, is also a yes/no state with no additional reliable information available. We rely on the emergent behavior of a consensus network to determine whether the solve time is "sane" or "insane", as nodes accept or reject blocks according to what they believe the current time is. The only valid sanity tests related to "time" for each node are: a) Is the block's reported solve time after the preceding block's reported solve time? b) Is the block's reported solve time before the current time as I understand it? There's no good reason to accept a block that has a timestamp that is in the future.

zawy12 commented 5 years ago

The relative probability of finding a block remains the same. If you're 1% of network HR, then you have 1% chance of finding the block for any given timestamp. The difficulty changes with time, but it's not because of how many hashes have been done.

I have no concerns about it other than the things I mention in the issue. I consider it finished, except I need to find a better baseline DA for the BTC/Zcash version (target method has more solvetime error than difficulty method).

zawy12 commented 5 years ago

Xchange successfully implemented a TSA version close to the BTC version above and it functions as well as expected. No solvetimes are more than 3.4x target.

image

image

image

cryptozeny commented 5 years ago

Congratulations! FYI, Zcash is a 17 block average, so the comparison may be different... (??)

cryptforall commented 5 years ago

The Extra addition that was made to Xchange.

if ( (previousTimestamp - badblockcheck->GetBlockTime()) <= T/R && ST<(T-(T/5)) ) { TSATarget = nextTarget/5; } else if ( ST <= T/5 ) { TSATarget = nextTarget/5; }

To clarify why I did this,

else if ( ST <= T/5 ) { TSATarget = nextTarget/5; } Hash attack programs attack the difficulty in the 0-1 second range, so when it solves the difficulty before it reaches T/5, the

if ( (previousTimestamp - badblockcheck->GetBlockTime()) <= T/R && ST<(T-(T/5)) ) { TSATarget = nextTarget/5; }

checks the previous block, preventing it from breaking the chain and allowing a high hash attack. So the next target is decreased to 1/5, and if that is solved, it re-checks and decreases it again. This breaks the loop of the attack, ultimately breaking the attack. This is why it is anti-51%. The on-and-off is handled by the TSA.

I did this while studying attacks using the program used in a high hash attack. I placed a bounty on proof of a 51% attack on XCG, yet no one has been successful.

zawy12 commented 4 years ago

This is the TSA we used on Xchange

// Xbuffer Xchange (c) 2018 Kalcan
// TSA Copyright (c) 2018 Zawy, Kalcan (XCG)
// LWMA Copyright (c) 2017-2018 The Bitcoin Gold developers
// ############# Begin LWMA
const int64_t T = consensusParams.nPowTargetSpacing;
const int64_t N = consensusParams.nZawyLwmaAveragingWindow;
const int64_t k = N * (N + 1) * T / 2;
const int64_t height = pindexPrev->nHeight;
const arith_uint256 powLimit = UintToArith256(consensusParams.powLimit);
// For startup:
int64_t hashrateGuess = 1000; // hashes per second expected at startup
arith_uint256 targetGuess = 115E75/(hashrateGuess*T); // 115E75 =~ pow(2,256)

if ( targetGuess > powLimit ) { targetGuess=powLimit; }
if ( height <= N+1 ) { return targetGuess.GetCompact(); }

arith_uint256 sumTarget, previousTarget, nextTarget;
int64_t thisTimestamp, previousTimestamp;
int64_t t = 0, j = 0, SumWAvgtime=0, SS=0;   // Remove SS if following only LWMA and TSA

const CBlockIndex* blockPreviousTimestamp = pindexPrev->GetAncestor(height - N);
previousTimestamp = blockPreviousTimestamp->GetBlockTime();

// Loop through N most recent blocks. // LWMA: previous targets (nBits) and timestamps are the focus
for (int64_t i = height - N + 1; i <= height; i++) {
  const CBlockIndex* block = pindexPrev->GetAncestor(i);
  thisTimestamp = (block->GetBlockTime() > previousTimestamp) ? block->GetBlockTime() : previousTimestamp + 1;
  int64_t solvetime = std::min(6 * T, thisTimestamp - previousTimestamp);
  previousTimestamp = thisTimestamp;
  SS+=solvetime;
  j++;
  t += solvetime * j; // Weighted solvetime sum.
  arith_uint256 target;
  target.SetCompact(block->nBits);
  sumTarget += target / (k * N);
}
nextTarget = t * sumTarget;

// The above completes the BTG/MicroBitcoin/Zawy code except last 3 lines were removed

// ##############    Begin TSA Declarations
// R is the "softness" of the per-block TSA adjustment to the DA. R<6 is aggressive.
int64_t R = 2;  assert(R > 1);
// "m" is a factor to help get e^x from integer math. 1E5 was not as precise
int64_t m = 1E6;
arith_uint256  TSATarget;
int64_t exm = m;  // This will become m*e^x. Initial value is m*e^(mod(<1)) = m.
int64_t SC, ST, i, f; //remove SC for TSA & LWMA
int64_t templateTimestamp = pblock->nTime;
previousTimestamp = pindexPrev->GetBlockTime();
const CBlockIndex* badblockcheck = pindexPrev->GetAncestor(height-1); //Only for Xbuffer

ST = std::min(templateTimestamp - previousTimestamp, 6*T);

// #########  Begin Unwanted Modification to TSA logic
//----------Xbuffer------------------------------
SC=ST; 
if ( SS/N + 1 <= T/R ) { SS = ( SS/(N + 1)/T )*SS; }
ST = ( ST*( ( SS/(N + 1)*1000 )/T ) ) / 1000;
if ( SC > T/R && ST == 0 ) { ST = SC; }
if ( ST < 0 ) { ST = 0; }
if ( (previousTimestamp - badblockcheck->GetBlockTime()) <= T/R && ST<(T-(T/5)) ) { TSATarget = nextTarget/5; }
else if ( ST <= T/5 ) { TSATarget = nextTarget/5; }

// ########### Begin Actual TSA   ##########
else {
  // It would be good to turn the for statement into a look-up table;
  for ( i = 1; i <= ST/T/R ; i++ ) { exm = (exm*static_cast<int64_t>(2.71828*m))/m; }
  f = ST % (T*R);
  exm = (exm*(m+(f*(m+(f*(m+(f*(m+(f*m)/(4*T*R)))/(3*T*R)))/(2*T*R)))/(T*R)))/m;
  // 1000 below is to prevent overflow on testnet
  TSATarget = (nextTarget*((1000*(m*T + (ST - T)*exm))/(m*ST)))/1000;
}
if (TSATarget > powLimit) { TSATarget = powLimit; }
return TSATarget.GetCompact();
}