zawy12 / difficulty-algorithms

See the Issues for difficulty algorithms
MIT License

Question about LWMA: is version 3 still current? #79

Closed someunknownman closed 5 months ago

someunknownman commented 7 months ago

Question about LWMA: is version 3 still current, or would some other algorithm be better?

The planned project block time is 5 minutes (300 s), retargeting every block.

cryptforall commented 7 months ago

Per-block retargeting can be configured in chainparams.cpp, and the block time doesn't matter. The algorithm is a method to mitigate hash-rate spikes, deviations, and anomalies that would otherwise enable exploits.

someunknownman commented 7 months ago

If you use something other than BTC's native algorithm, the "retarget setting" nPowTargetTimespan does nothing.

And the question was more about LWMA: is the latest version fixed, with no new bugs found? As I understand it, a good setting for a 5-minute block time is N = 576.
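For context, the LWMA being asked about can be sketched as below. This is my own minimal Python illustration (difficulty form), omitting the solvetime clamps and integer target arithmetic a real LWMA v3 implementation needs; note that N = 576 at 5-minute blocks means the averaging window spans exactly two days (576 * 300 s = 172800 s).

```python
def lwma_next_difficulty(difficulties, solvetimes, T):
    """Simplified LWMA step: recent solvetimes get linearly larger weights.

    difficulties, solvetimes: the last N blocks, oldest first.
    T: target block time in seconds.
    Illustration only -- a real LWMA v3 clamps solvetimes and uses
    integer target arithmetic.
    """
    N = len(solvetimes)
    # Linearly weighted sum of solvetimes; the most recent block weighs most.
    weighted_st = sum((i + 1) * st for i, st in enumerate(solvetimes))
    avg_d = sum(difficulties) / N
    # Weighted mean solvetime is weighted_st * 2 / (N*(N+1));
    # scale average difficulty by T / weighted_mean_solvetime.
    return avg_d * T * N * (N + 1) / (2 * weighted_st)

# Blocks solved exactly on time leave difficulty unchanged:
print(lwma_next_difficulty([1000.0] * 10, [300] * 10, 300))  # 1000.0
# Uniformly slow blocks (600 s vs 300 s) halve the difficulty:
print(lwma_next_difficulty([1000.0] * 10, [600] * 10, 300))  # 500.0
```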

zawy12 commented 7 months ago

N above 300 will be good, and there isn't a better algorithm, but there are several as good. I would not go higher than 600 for any block time because that's accurate enough. I like the following algorithm ("WTEMA") better because it's simpler. It's a lot faster to calculate than LWMA, but over a million blocks LWMA with N=576 probably adds less than 30 seconds of validation time.

K = Kp + Kp*t/T/N - Kp/N

K = current target
Kp = prior target
t = solvetime of the Kp block (its timestamp minus the timestamp before it)
T = block time
N = 100 to 300 (half of LWMA's N, to get the same response speed)

But coins using the Monero codebase can't use this, because there can't be a 1-block delay in Kp or in the solvetime (the timestamp of Kp minus the timestamp of the block before it). I can't remember exactly, but there was something about the last timestamp or difficulty not being easily available to the difficulty algorithm.
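The update above can be sketched in a few lines. This is my own integer-arithmetic illustration of the formula as written, not code from any particular coin; a production version would also clamp t and cap the resulting target at powLimit.

```python
def wtema_next_target(Kp, t, T, N):
    """One WTEMA step: K = Kp + Kp*t/T/N - Kp/N, in integer math.

    Kp: prior target, t: solvetime of the prior block (seconds),
    T: target block time, N: smoothing window (~half of LWMA's N).
    Multiply before dividing so integer truncation stays small.
    """
    return Kp + (Kp * t) // (T * N) - Kp // N

# A block solved exactly on time leaves the target unchanged:
print(wtema_next_target(1_000_000, 300, 300, 200))  # 1000000
# A slow block (600 s vs 300 s) raises the target (lowers difficulty):
print(wtema_next_target(1_000_000, 600, 300, 200))  # 1005000
```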

cryptforall commented 7 months ago

Set consensus.nPowTargetTimespan in chainparams.cpp. Also make sure "no retargeting" (fPowNoRetargeting) is false. The code snippet is only for pow.cpp; you will need to follow up every variable or constant it references back to the source files.

someunknownman commented 7 months ago

It's based on BTC; the question was more about LWMA and whether it's still relevant, since I'm a little lost on the dates of the LWMA vulnerabilities.

As for WTEMA, I don't quite understand how to implement it, because I can't grasp the essence and haven't found examples like there are for LWMA; I could implement it incorrectly and end up with something vulnerable. If it's simpler than LWMA, so much the better, since later I will need to implement it in Python, Go, JS, etc. The simpler the solution, the better.

It is also important that the algorithm does not break on 32-bit and exotic architectures. I don't remember exactly, but somewhere in some algorithms there were math problems on RISC or ARM systems.
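One portability pitfall behind that concern (my own illustration, not a case from the thread): with fixed-width integers, the order of multiply and divide matters, and an intermediate like Kp*t can exceed 64 bits for realistic target sizes, which is why Bitcoin-derived code does this math in 256-bit arithmetic.

```python
# Why fixed-width integer order-of-operations matters for difficulty math.
Kp = 1_000_003           # prior target (small example value)
t, T, N = 301, 300, 200  # solvetime, block time, window

good = (Kp * t) // (T * N)   # multiply first, divide once
bad = Kp // (T * N) * t      # dividing first truncates badly

print(good, bad)  # 5016 vs 4816 -- dividing early loses precision

# On 32/64-bit platforms the wide intermediate is the hazard:
near_pow_limit = 2**224      # a realistic-size expanded target
assert near_pow_limit * t > 2**64  # Kp*t would overflow 64-bit integers
```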

Yes, I understand which parameters are responsible for what; the question was more about the relevance of the algorithms and the recommended parameters for a 5-minute block. It's clearly not entirely correct to speak of a "difficulty recalculation time", since it is based on the past N blocks.

And of course, all unused parameters are commented out, just like the exceptions for blocks that don't exist on a new blockchain, for example BIP30:

    // We don't have those blocks, so skip the BIP30 exceptions
    /*
    bool fEnforceBIP30 = !((pindex->nHeight==91722 && pindex->GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
                           (pindex->nHeight==91812 && pindex->GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f")));
    */

    bool fEnforceBIP30 = true;

    // We don't have those blocks, so skip the BIP34-implies-BIP30 height check
    /*
    if (fEnforceBIP30 || pindex->nHeight >= BIP34_IMPLIES_BIP30_LIMIT) {
    */

    if (fEnforceBIP30) {
        for (const auto& tx : block.vtx) {
            for (size_t o = 0; o < tx->vout.size(); o++) {
                if (view.HaveCoin(COutPoint(tx->GetHash(), o))) {

// We don't have those blocks, so the BIP30 repeat/unspendable exceptions are commented out
/*
bool IsBIP30Repeat(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91842 && block_index.GetBlockHash() == uint256S("0x00000000000a4d0a398161ffc163c503763b1f4360639393e0e4c8e300e0caec")) ||
           (block_index.nHeight==91880 && block_index.GetBlockHash() == uint256S("0x00000000000743f190a18c5577a3c2d2a1f610ae9601ac046a38084ccb7cd721"));
}

bool IsBIP30Unspendable(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91722 && block_index.GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
           (block_index.nHeight==91812 && block_index.GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f"));
}
*/
cryptforall commented 7 months ago

To be honest, I experimented on a testnet with N and the FTL.

Ignore XbufferTSA.

  1. GetNextWorkRequired : takes the last block index ( pindexLast ), the block header ( pblock ), and the consensus parameters ( params ). It first asserts that pindexLast is not null, then takes the height of the last block plus 1. If that height is greater than or equal to a specific activation height ( params.XCGZawyLWMAHeight ), it calls LwmaGetNextWorkRequired with the same parameters; otherwise it calls GetNextWorkRequiredBTC.
  2. LwmaGetNextWorkRequired : called when the block height is at or above params.XCGZawyLWMAHeight . After the null check on pindexLast, it checks whether a second activation parameter ( params.XbufferTSAH ) is below the last block's height. If so, it calls XbufferTSA with the block header, consensus parameters, and previous block index; otherwise it calls LwmaCalculateNextWorkRequired with the last block index and consensus parameters.
  3. XbufferTSA : calculates the next target using the XbufferTSA algorithm. It initializes some variables and constants, then computes an initial target guess from a hashrate guess and the target spacing, capping it at the maximum allowed target ( powLimit ). If the height is at most the averaging window size plus 1, it returns the target guess in compact form. Otherwise it derives the next target from the previous blocks' timestamps and targets, adjusts it if the block timestamp is before a specific time, and returns the final target in compact form.
  4. LwmaCalculateNextWorkRequired : calculates the next target using the LWMA algorithm (it is the branch taken in step 2 when XbufferTSA has not yet activated). If the fPowNoRetargeting parameter is true, it simply returns the current block's bits. Otherwise it iterates over the previous blocks in the averaging window, accumulating a sum of targets and a weighted solvetime, then computes the next target from those sums and returns it in compact form.
  5. GetNextWorkRequiredBTC : used below params.XCGZawyLWMAHeight ; this is the Bitcoin algorithm. It initializes the proof-of-work limit ( nProofOfWorkLimit ) from the consensus parameters and returns it if pindexLast is null. If the block height plus 1 is not divisible by the difficulty adjustment interval, it returns the current block's bits. Otherwise it finds the first block of the adjustment interval and calls CalculateNextWorkRequired with the last block index, that block's timestamp, and the consensus parameters, returning the result in compact form.
  6. Use N and the FTL as suggested, and plot hashrate charts to tune them. Adjust for v3.

FYI, the reason I'm doing this: if you understand the logic, you can even switch algorithms live on a blockchain at a set block height. That's a real benefit.
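The height-based dispatch in steps 1-2 can be sketched like this. This is my own Python illustration using the thread's parameter names as hypothetical inputs; the return values are just labels for which function would run.

```python
def choose_algo(last_height, lwma_height, xbuffer_tsa_height):
    """Mirror the dispatch described in steps 1-2 above.

    last_height: height of pindexLast; the next block is last_height + 1.
    lwma_height        ~ params.XCGZawyLWMAHeight (LWMA activation height)
    xbuffer_tsa_height ~ params.XbufferTSAH (later XbufferTSA activation)
    """
    next_height = last_height + 1
    if next_height < lwma_height:
        return "GetNextWorkRequiredBTC"       # pre-fork Bitcoin rules
    if xbuffer_tsa_height < last_height:
        return "XbufferTSA"                   # latest activation wins
    return "LwmaCalculateNextWorkRequired"    # LWMA era

print(choose_algo(99, 1000, 5000))    # GetNextWorkRequiredBTC
print(choose_algo(2000, 1000, 5000))  # LwmaCalculateNextWorkRequired
print(choose_algo(6000, 1000, 5000))  # XbufferTSA
```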

cryptforall commented 7 months ago

Some additional information for KT, the first true 51% attack prevention:

- Dependent on time; uses solve time (ST).
- 70-sec minimum and 180-sec maximum block times.
- CPU mining profitable in the wallet.
- Rejects pools with on-and-off mining.
- Short-term NH and MRR are considered inefficient.
- No delays from a high hash drop-off between blocks.
- Long-term miners benefit the most.
- Requires an update to explorers.
- The difficulty (D) is not a dependent factor when ST moves from the expected block time and adjustments are made.
- D is set high and is unlikely to be solvable for 70 sec, followed by 70 sec of fair-zone mining.
- If the buffer overloads and breaks before 70 sec, the next block's D is greatly increased for 70 sec.
- From 150 sec to the maximum block time, the block is more dependent on latency and processing speed.
- Assets on the chain can survive a sudden blackout with only 2 peers remaining and have no block ST delays.
- Block time reliability and no hash-drop delays in ST.
- getdifficulty is the actual D that was needed to solve the block at an exact point in time.
- Network hash (NHPS) is now the actual average hash/s needed to solve the block.
- NHPS is an indicator of which system fits you, solo or pool mining.
- Miners with hash rates above NHPS can solo mine successfully.
- A hash rate below the average would do better in a pool, or solo while expecting free-fall adjustment blocks during increased hash rate.
- Increased distribution, which prevents market manipulation.
- The ability to change D per process cycle lets miners enter the competition with lower D as ST reaches 150 sec.
- As ST reaches 150 to 180 sec, D decreases toward the lowest limit.
- Honest, longer-connected miners benefit from the free-fall block system.
- If the block reaches 180 sec, D reverts to the lowest value and enables CPU mining. If no one can solve it, start again.
- On-and-off mining attempts hurt that user and reward the remaining miners with low D as the block pulls ST toward the expected 150 sec; essentially everyone on the network gets a chance at a block.

dgenr8 commented 7 months ago

The mimblewimble implementation of WTEMA is quite simply expressed by their actual code (consensus.rs, lines 389-390):

let next_diff = last_diff *
                WTEMA_HALF_LIFE / (WTEMA_HALF_LIFE - BLOCK_TIME_SEC + last_block_time);

WTEMA_HALF_LIFE is a constant with time units, which makes the expression easy to understand. It's not exactly a half-life numerically, but a "characteristic decay time".

mimblewimble has a 5-minute future time limit.
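The Rust expression above can be checked numerically. Here is my own Python transliteration (difficulty form, so it moves in the opposite direction from the target-form update earlier in the thread); the function name is mine, not Grin's.

```python
def wtema_next_diff(last_diff, last_block_time, half_life, block_time_sec):
    """mimblewimble-style WTEMA step (difficulty form), integer math.

    last_block_time: solvetime of the previous block in seconds.
    half_life: the WTEMA_HALF_LIFE "characteristic decay time" constant.
    """
    return last_diff * half_life // (half_life - block_time_sec + last_block_time)

# An on-time block leaves difficulty unchanged:
print(wtema_next_diff(1_000_000, 300, 60_000, 300))  # 1000000
# A slow block lowers difficulty; a fast block raises it:
print(wtema_next_diff(1_000_000, 600, 60_000, 300))  # 995024
print(wtema_next_diff(1_000_000, 150, 60_000, 300))  # 1002506
```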

someunknownman commented 7 months ago

N above 300 will be good and there isn't a better algorithm, but there are several as good. I would not go higher than 600 for any block time because it's accurate enough. I like the following algorithm ("WTEMA)" better because it's simpler. It's a lot faster to calculate than LWMA, but after a million blocks LWMA with N=576 probably adds less than 30 seconds of validation. K = Kp + Kp*t/T/N - Kp/N K= current target Kp = prior target t = solvetime of Kp block (it's timestamp minus the timestamp before it) T = block time N = 100 to 300, half of LWMA to get the same response speed. But for coins using the Monero codebase this can't be used because there can't be a 1-block delay in Kp or the solvetime which is the timestamp of Kp minus the timestamp of the block before it. I can't remember, but there was something about the last timestamp or difficulty not being easily available to the difficulty algorithm.

it's based on BTC, the question was more about LWMA if it's still relevant since I'm a little lost on the LWMA vulnerability dates As for WTEMA, I don’t quite understand how to implement this because I can’t understand the essence and haven’t found examples like with lwma, I can implement it incorrectly and it will turn out to be vulnerable devilry if it's simpler than LWMA, it's better as later i will need implement it in Python, Go, js, etc. The simpler the solution, the better. It is also important that the algorithm does not break on 32-bit and exotic architectures. I don’t remember exactly, but somewhere in some algorithms there were problems with mathematics at RISK or ARM systems.

if use some other than BTC native algo "retarget setting" - nPowTargetTimespan do notting. and qustion was more about LWMA is latest version fixed and there was no new bugs ? How i undestand good settings for 5m is N 576

if use some other than BTC native algo "retarget setting" - nPowTargetTimespan do notting. and qustion was more about LWMA is latest version fixed and there was no new bugs ? How i undestand good settings for 5m is N 576

Chainprams.cpp consensus.npowtargettimespan. Also make sure the no re targeting is false. The code snippet is only for pow.cpp. You will need follow each request of a variable or a constant from the source code file.

yes, I understand what parameters are responsible for what , the question was more about the relevance of the algorithms and the recommended parameters for a 5-minute block, it is clear that it is not entirely correct to say the "difficulty recalculation time" since it is based on the past "N" blocks And of course, all unused parameters are commented out in the same way as exceptions for non-existent blocks in the new blockchain, for example BIP 30

    // We don have that blocks so skip BIP30 tx's
    /*
    bool fEnforceBIP30 = !((pindex->nHeight==91722 && pindex->GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
                           (pindex->nHeight==91812 && pindex->GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f")));
    */

    bool fEnforceBIP30 = true;

    // We don have that blocks so skip BIP30 tx's

    /*
    if (fEnforceBIP30 || pindex->nHeight >= BIP34_IMPLIES_BIP30_LIMIT) {
    */

    if (fEnforceBIP30) {
        for (const auto& tx : block.vtx) {
            for (size_t o = 0; o < tx->vout.size(); o++) {
                if (view.HaveCoin(COutPoint(tx->GetHash(), o))) {
// We don have that blocks so skip BIP30

/*bool IsBIP30Repeat(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91842 && block_index.GetBlockHash() == uint256S("0x00000000000a4d0a398161ffc163c503763b1f4360639393e0e4c8e300e0caec")) ||
           (block_index.nHeight==91880 && block_index.GetBlockHash() == uint256S("0x00000000000743f190a18c5577a3c2d2a1f610ae9601ac046a38084ccb7cd721"));
@@ -5901,7 +5923,7 @@ bool IsBIP30Unspendable(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91722 && block_index.GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
           (block_index.nHeight==91812 && block_index.GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f"));
}
}*/

To be honest I played with testnet and N, FTL

N above 300 will be good and there isn't a better algorithm, but there are several as good. I would not go higher than 600 for any block time because it's accurate enough. I like the following algorithm ("WTEMA)" better because it's simpler. It's a lot faster to calculate than LWMA, but after a million blocks LWMA with N=576 probably adds less than 30 seconds of validation. K = Kp + Kp*t/T/N - Kp/N K= current target Kp = prior target t = solvetime of Kp block (it's timestamp minus the timestamp before it) T = block time N = 100 to 300, half of LWMA to get the same response speed. But for coins using the Monero codebase this can't be used because there can't be a 1-block delay in Kp or the solvetime which is the timestamp of Kp minus the timestamp of the block before it. I can't remember, but there was something about the last timestamp or difficulty not being easily available to the difficulty algorithm.

it's based on BTC, the question was more about LWMA if it's still relevant since I'm a little lost on the LWMA vulnerability dates As for WTEMA, I don’t quite understand how to implement this because I can’t understand the essence and haven’t found examples like with lwma, I can implement it incorrectly and it will turn out to be vulnerable devilry if it's simpler than LWMA, it's better as later i will need implement it in Python, Go, js, etc. The simpler the solution, the better. It is also important that the algorithm does not break on 32-bit and exotic architectures. I don’t remember exactly, but somewhere in some algorithms there were problems with mathematics at RISK or ARM systems.

if use some other than BTC native algo "retarget setting" - nPowTargetTimespan do notting. and qustion was more about LWMA is latest version fixed and there was no new bugs ? How i undestand good settings for 5m is N 576

if use some other than BTC native algo "retarget setting" - nPowTargetTimespan do notting. and qustion was more about LWMA is latest version fixed and there was no new bugs ? How i undestand good settings for 5m is N 576

Chainprams.cpp consensus.npowtargettimespan. Also make sure the no re targeting is false. The code snippet is only for pow.cpp. You will need follow each request of a variable or a constant from the source code file.

yes, I understand what parameters are responsible for what , the question was more about the relevance of the algorithms and the recommended parameters for a 5-minute block, it is clear that it is not entirely correct to say the "difficulty recalculation time" since it is based on the past "N" blocks And of course, all unused parameters are commented out in the same way as exceptions for non-existent blocks in the new blockchain, for example BIP 30

    // We don have that blocks so skip BIP30 tx's
    /*
    bool fEnforceBIP30 = !((pindex->nHeight==91722 && pindex->GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
                           (pindex->nHeight==91812 && pindex->GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f")));
    */

    bool fEnforceBIP30 = true;

    // We don have that blocks so skip BIP30 tx's

    /*
    if (fEnforceBIP30 || pindex->nHeight >= BIP34_IMPLIES_BIP30_LIMIT) {
    */

    if (fEnforceBIP30) {
        for (const auto& tx : block.vtx) {
            for (size_t o = 0; o < tx->vout.size(); o++) {
                if (view.HaveCoin(COutPoint(tx->GetHash(), o))) {
// We don have that blocks so skip BIP30

/*bool IsBIP30Repeat(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91842 && block_index.GetBlockHash() == uint256S("0x00000000000a4d0a398161ffc163c503763b1f4360639393e0e4c8e300e0caec")) ||
           (block_index.nHeight==91880 && block_index.GetBlockHash() == uint256S("0x00000000000743f190a18c5577a3c2d2a1f610ae9601ac046a38084ccb7cd721"));
@@ -5901,7 +5923,7 @@ bool IsBIP30Unspendable(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91722 && block_index.GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
           (block_index.nHeight==91812 && block_index.GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f"));
}
}*/

Ignore xbuffertsa

  1. GetNextWorkRequired : This function takes in the last block index ( pindexLast ), the block header ( pblock ), and the consensus parameters ( params ). It first checks if the pindexLast is null, and if so, throws an exception. Then, it gets the height of the last block and increments it by 1. If the height is greater than or equal to a specific height value ( params.XCGZawyLWMAHeight ), it calls the LwmaGetNextWorkRequired function with the same parameters. Otherwise, it calls the GetNextWorkRequiredBTC function.
  2. LwmaGetNextWorkRequired : This function is called when the block height is greater than or equal to params.XCGZawyLWMAHeight . It first checks if the pindexLast is null, and if so, throws an exception. Then, it checks if a specific parameter ( params.XbufferTSAH ) is less than the height of the last block. If true, it calls the XbufferTSA function with the block header, consensus parameters, and the previous block index. Otherwise, it calls the LwmaCalculateNextWorkRequired function with the last block index and consensus parameters.
  3. XbufferTSA : This function calculates the next target difficulty for a block using the XbufferTSA algorithm. It first initializes some variables and constants. Then, it calculates an initial target guess based on a hashrate guess and the target spacing. If the target guess is greater than the maximum allowed target ( powLimit ), it sets the target guess to the maximum target. Next, it checks if the height is less than or equal to the averaging window size plus 1. If true, it returns the target guess as the compact representation. Otherwise, it performs a series of calculations to determine the next target based on the previous blocks' timestamps and targets. Finally, it checks if the block timestamp is before a specific time and adjusts the target if necessary. The final target is returned as the compact representation.
  4. LwmaCalculateNextWorkRequired : This function is called when the block height is less than params.XCGZawyLWMAHeight and is used to calculate the next target difficulty using the Lwma algorithm. If the fPowNoRetargeting parameter is true, it simply returns the current block's bits. Otherwise, it initializes some variables and constants. Then, it iterates over the previous blocks within the averaging window and calculates a sum of targets and a weighted average time. After the loop, it calculates the next target based on the sum of targets and the weighted average time. Finally, it returns the next target as the compact representation.
  5. GetNextWorkRequiredBTC : This function is called when the block height is not greater than or equal to params.XCGZawyLWMAHeight and is used to calculate the next target difficulty using the Bitcoin algorithm. It first initializes the proof-of-work limit ( nProofOfWorkLimit ) based on the consensus parameters. Then, it checks if the pindexLast is null and if so, returns the proof-of-work limit. Next, it checks if the block height plus 1 is not divisible by the difficulty adjustment interval. If true, it returns the current block's bits. Otherwise, it calculates the first block index within the difficulty adjustment interval and calls the CalculateNextWorkRequired function with the last block index, the first block's timestamp, and the consensus parameters. The result is returned as the compact representation.
  6. Use N and FTL as suggested, and do hashrate charts to tweak. Adjust for v3.

FYI, the reason I am doing this: if you understand the logic, you can even do it live on a blockchain at a set block. It's a good benefit.

src/pow.cpp

unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock, const Consensus::Params& params)
{

    assert(pindexLast != nullptr);
    unsigned int nProofOfWorkLimit = UintToArith256(params.powLimit).GetCompact();
    /*

    // Only change once per difficulty adjustment interval
    if ((pindexLast->nHeight+1) % params.DifficultyAdjustmentInterval() != 0)
    {
        if (params.fPowAllowMinDifficultyBlocks)
        {
            // Special difficulty rule for testnet:
            // If the new block's timestamp is more than 2* 10 minutes
            // then allow mining of a min-difficulty block.
            if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2)
                return nProofOfWorkLimit;
            else
            {
                // Return the last non-special-min-difficulty-rules-block
                const CBlockIndex* pindex = pindexLast;
                while (pindex->pprev && pindex->nHeight % params.DifficultyAdjustmentInterval() != 0 && pindex->nBits == nProofOfWorkLimit)
                    pindex = pindex->pprev;
                return pindex->nBits;
            }
        }
        return pindexLast->nBits;
    }

    // Go back by what we want to be 14 days worth of blocks

    int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1);
    assert(nHeightFirst >= 0);

    if (nHeightFirst < 0) {
        return nProofOfWorkLimit;
    }

    const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst);
    assert(pindexFirst);
    */
    return Lwma3CalculateNextWorkRequired(pindexLast, params);
}

unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params)
{
    if (params.fPowNoRetargeting)
        return pindexLast->nBits;

    // Limit adjustment step
    int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime;
    if (nActualTimespan < params.nPowTargetTimespan/4)
        nActualTimespan = params.nPowTargetTimespan/4;
    if (nActualTimespan > params.nPowTargetTimespan*4)
        nActualTimespan = params.nPowTargetTimespan*4;

    // Retarget
    const arith_uint256 bnPowLimit = UintToArith256(params.powLimit);
    arith_uint256 bnNew;
    bnNew.SetCompact(pindexLast->nBits);
    bnNew *= nActualTimespan;
    bnNew /= params.nPowTargetTimespan;

    if (bnNew > bnPowLimit)
        bnNew = bnPowLimit;

    return bnNew.GetCompact();
}

// LWMA-1 for BTC & Zcash clones
// Copyright (c) 2017-2019 The Bitcoin Gold developers, Zawy, iamstenman (Microbitcoin)
// MIT License
// Algorithm by Zawy, a modification of WT-144 by Tom Harding
// For updates see
// https://github.com/zawy12/difficulty-algorithms/issues/3#issuecomment-442129791
// Do not use Zcash's / Digishield's method of ignoring the ~6 most recent 
// timestamps via the median past timestamp (MTP of 11).
// Changing MTP to 1 instead of 11 enforces sequential timestamps. Not doing this was the
// most serious, problematic, & fundamental consensus theory mistake made in bitcoin but
// this change may require changes elsewhere such as creating block headers or what pools do.
//  FTL should be lowered to about N*T/20.
//  FTL in BTC clones is MAX_FUTURE_BLOCK_TIME in chain.h.
//  FTL in Ignition, Numus, and others can be found in main.h as DRIFT.
//  FTL in Zcash & Dash clones need to change the 2*60*60 here:
//  if (block.GetBlockTime() > nAdjustedTime + 2 * 60 * 60)
//  which is around line 3700 in main.cpp in ZEC and validation.cpp in Dash
//  If your coin uses median network time instead of node's time, the "revert to 
//  node time" rule (70 minutes in BCH, ZEC, & BTC) should be reduced to FTL/2 
//  to prevent 33% Sybil attack that can manipulate difficulty via timestamps. See:
// https://github.com/zcash/zcash/issues/4021

unsigned int Lwma3CalculateNextWorkRequired(const CBlockIndex* pindexLast, const Consensus::Params& params)
{
    const int64_t T = params.nPowTargetSpacing;

  // For T=600 use N=288 (takes 2 days to fully respond to hashrate changes) and has 
  //  a StdDev of N^(-0.5) which will often be the change in difficulty in N/4 blocks when hashrate is 
  // constant. 10% of blocks will have an error >2x the StdDev above or below where D should be. 
  //  This N=288 is like N=144 in ASERT which is N=144*ln(2)=100 in 
  // terms of BCH's ASERT.  BCH's ASERT uses N=288 which is like 2*288/ln(2) = 831 = N for 
  // LWMA. ASERT and LWMA are almost indistinguishable once this adjustment to N is used. In other words,
  // 831/144 = 5.8 means my N=144 recommendation for T=600 is 5.8 times faster but SQRT(5.8) less 
  // stability than BCH's ASERT. The StdDev for 288 is 6%, so 12% accidental variation will be seen in 10% of blocks.
  // Twice 288 is 576 which will have 4.2% StdDev and be 2x slower. This is reasonable for T=300 or less.
  // For T = 60, N=1,000 will have 3% StdDev & maybe plenty fast, but require 1M multiplications & additions per 
  // 1,000 blocks for validation which might be a consideration. I would not go over N=576 and prefer 360
  // so that it can respond in 6 hours to hashrate changes.

    const int64_t N = params.lwmaAveragingWindow;

    // Low-difficulty blocks for difficulty initiation.
    const int64_t L = 577; // expected N + 1, for security after the give-away blocks.

    // Define a k that will be used to get a proper average after weighting the solvetimes.
    const int64_t k = N * (N + 1) * T / 2; 

    const int64_t height = pindexLast->nHeight;
    const arith_uint256 powLimit = UintToArith256(params.powLimit);

   // New coins just "give away" first N blocks. It's better to guess
   // this value instead of using powLimit, but err on high side to not get stuck.
    if (params.fPowAllowMinDifficultyBlocks) { return powLimit.GetCompact(); }
    if (height <= L) { return powLimit.GetCompact(); }

    arith_uint256 avgTarget, nextTarget;
    int64_t thisTimestamp, previousTimestamp;
    int64_t sumWeightedSolvetimes = 0, j = 0;

    const CBlockIndex* blockPreviousTimestamp = pindexLast->GetAncestor(height - N);
    previousTimestamp = blockPreviousTimestamp->GetBlockTime();

    // Loop through N most recent blocks. 
    for (int64_t i = height - N + 1; i <= height; i++) {
        const CBlockIndex* block = pindexLast->GetAncestor(i);

        // Prevent solvetimes from being negative in a safe way. It must be done like this. 
        // Do not attempt anything like  if (solvetime < 1) {solvetime=1;}
        // The +1 ensures new coins do not calculate nextTarget = 0.
        thisTimestamp = (block->GetBlockTime() > previousTimestamp) ? 
                            block->GetBlockTime() : previousTimestamp + 1;

       // 6*T limit prevents large drops in diff from long solvetimes which would cause oscillations.
        int64_t solvetime = std::min(6 * T, thisTimestamp - previousTimestamp);

       // The following is part of "preventing negative solvetimes". 
        previousTimestamp = thisTimestamp;

       // Give linearly higher weight to more recent solvetimes.
        j++;
        sumWeightedSolvetimes += solvetime * j; 

        arith_uint256 target;
        target.SetCompact(block->nBits);
        avgTarget += target / N / k; // Dividing by k here prevents an overflow below.
    }
    nextTarget = avgTarget * sumWeightedSolvetimes; 

    if (nextTarget > powLimit) { nextTarget = powLimit; }

    return nextTarget.GetCompact();
}
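For cross-checking ports (Python, Go, JS, etc.), the loop above can be reduced to a toy sketch over plain 64-bit integers. This is an illustration only, not consensus code: the real function uses arith_uint256 and the monotonic-timestamp clamping shown above, and the helper name here is hypothetical.

```cpp
#include <cstdint>
#include <vector>

// Toy LWMA-1: linear weights on clamped solvetimes; each target is
// pre-divided by N*k so the final multiply cannot overflow.
// Sketch only: real code uses arith_uint256 and monotonic timestamps.
uint64_t lwma_next_target(const std::vector<uint64_t>& targets,
                          const std::vector<int64_t>& solvetimes,
                          int64_t T) {
    const int64_t N = (int64_t)targets.size();
    const int64_t k = N * (N + 1) * T / 2;
    uint64_t avgTarget = 0;
    int64_t sumWeightedSolvetimes = 0, j = 0;
    for (int64_t i = 0; i < N; ++i) {
        int64_t st = solvetimes[i];
        if (st > 6 * T) st = 6 * T; // cap long solvetimes to avoid oscillations
        if (st < 1) st = 1;         // sketch-only guard; consensus code clamps
                                    // negatives via the timestamp trick above
        j++;
        sumWeightedSolvetimes += st * j; // more recent blocks weigh more
        avgTarget += targets[i] / (uint64_t)N / (uint64_t)k;
    }
    return avgTarget * (uint64_t)sumWeightedSolvetimes;
}
```

With all solvetimes equal to T the weighted sum equals k, so the target is unchanged; faster blocks shrink the target (raise difficulty) and slower blocks grow it.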

// Check that on difficulty adjustments, the new difficulty does not increase
// or decrease beyond the permitted limits.
bool PermittedDifficultyTransition(const Consensus::Params& params, int64_t height, uint32_t old_nbits, uint32_t new_nbits)
{
    /*
    if (params.fPowAllowMinDifficultyBlocks) return true;

    if (height % params.DifficultyAdjustmentInterval() == 0) {
        int64_t smallest_timespan = params.nPowTargetTimespan/4;
        int64_t largest_timespan = params.nPowTargetTimespan*4;

        const arith_uint256 pow_limit = UintToArith256(params.powLimit);
        arith_uint256 observed_new_target;
        observed_new_target.SetCompact(new_nbits);

        // Calculate the largest difficulty value possible:
        arith_uint256 largest_difficulty_target;
        largest_difficulty_target.SetCompact(old_nbits);
        largest_difficulty_target *= largest_timespan;
        largest_difficulty_target /= params.nPowTargetTimespan;

        if (largest_difficulty_target > pow_limit) {
            largest_difficulty_target = pow_limit;
        }

        // Round and then compare this new calculated value to what is
        // observed.
        arith_uint256 maximum_new_target;
        maximum_new_target.SetCompact(largest_difficulty_target.GetCompact());
        if (maximum_new_target < observed_new_target) return false;

        // Calculate the smallest difficulty value possible:
        arith_uint256 smallest_difficulty_target;
        smallest_difficulty_target.SetCompact(old_nbits);
        smallest_difficulty_target *= smallest_timespan;
        smallest_difficulty_target /= params.nPowTargetTimespan;

        if (smallest_difficulty_target > pow_limit) {
            smallest_difficulty_target = pow_limit;
        }

        // Round and then compare this new calculated value to what is
        // observed.
        arith_uint256 minimum_new_target;
        minimum_new_target.SetCompact(smallest_difficulty_target.GetCompact());
        if (minimum_new_target > observed_new_target) return false;
    } else if (old_nbits != new_nbits) {
        return false;
    }
    */
    return true;
}

bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params& params)
{
    bool fNegative;
    bool fOverflow;
    arith_uint256 bnTarget;

    bnTarget.SetCompact(nBits, &fNegative, &fOverflow);

    // Check range
    if (fNegative || bnTarget == 0 || fOverflow || bnTarget > UintToArith256(params.powLimit))
        return false;

    // Check proof of work matches claimed amount
    if (UintToArith256(hash) > bnTarget)
        return false;

    return true;
}

src/kernel/chainparams.cpp

        consensus.nPowTargetSpacing = 5 * 60; // 5-minute block time
        consensus.lwmaAveragingWindow = 576;  // 48 hours of blocks: 576 * 300 s

You will also see consensus.nMinerConfirmationWindow = 2016; // nPowTargetTimespan / nPowTargetSpacing. Since our retargeting algorithm does not use nPowTargetTimespan, it can be ignored, or even removed from the code if nothing else uses it.

src/chain.h

static constexpr int64_t MAX_FUTURE_BLOCK_TIME = 432 * 20; // FTL = N*T/20 = 576 * 300 / 20 = 8640 s, written as 432 * 20

/**
 * Timestamp window used as a grace period by code that compares external
 * timestamps (such as timestamps passed to RPCs, or wallet key creation times)
 * to block timestamps. This should be set at least as high as
 * MAX_FUTURE_BLOCK_TIME.
 */
static constexpr int64_t TIMESTAMP_WINDOW = MAX_FUTURE_BLOCK_TIME;

/**
 * Maximum gap between node time and block time used
 * for the "Catching up..." mode in GUI.
 *
 * Ref: https://github.com/bitcoin/bitcoin/pull/1026
 */
static constexpr int64_t MAX_BLOCK_TIME_GAP = 300 * 12; // Block time 300 * 12 = 3600 s

src/timedata.h

static const int64_t DEFAULT_MAX_TIME_ADJUSTMENT = 72 * 60; // MAX_FUTURE_BLOCK_TIME / 2 = 8640 / 2 = 4320 s = 72 min; note this is even more than BTC's default of 70 * 60 for a 10-minute block.
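A quick sanity check of the arithmetic behind these constants (a sketch; T = 300 and N = 576 are the values chosen in this thread, and the names are illustrative):

```cpp
#include <cstdint>

// Derived timing constants for T = 300 s blocks with LWMA window N = 576.
const int64_t T = 300;                 // block time in seconds
const int64_t N = 576;                 // LWMA averaging window
const int64_t FTL = N * T / 20;        // recommended future time limit = N*T/20
const int64_t MAX_TIME_ADJ = FTL / 2;  // "revert to node time" rule = FTL/2
```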

src/net_processing.cpp

/** How frequently to check for stale tips */
static constexpr auto STALE_CHECK_INTERVAL{5min}; // BTC uses 10 min for a 10-minute block.
Whether this parameter needs adapting is an open question.

/// Age after which a stale block will no longer be served if requested as
/// protection against fingerprinting. Set to one month, denominated in seconds.
static constexpr int STALE_RELAY_AGE_LIMIT = 30 * 24 * 60 * 60;
/// Age after which a block is considered historical for purposes of rate
/// limiting block relay. Set to one week, denominated in seconds.
static constexpr int HISTORICAL_BLOCK_AGE = 7 * 24 * 60 * 60;

src/headerssync.cpp (or use Bitcoin's headerssync parameter script to regenerate the values).

//! Store one header commitment per HEADER_COMMITMENT_PERIOD blocks.
constexpr size_t HEADER_COMMITMENT_PERIOD{330}; // value / 2 ?

//! Only feed headers to validation once this many headers on top have been
//! received and validated against commitments.
constexpr size_t REDOWNLOAD_BUFFER_SIZE{7221}; // upstream: 14441/606 = ~23.8 commitments; value / 2 ?
zawy12 commented 7 months ago

Mimblewimble has this: let next_diff = last_diff * WTEMA_HALF_LIFE / (WTEMA_HALF_LIFE - BLOCK_TIME_SEC + last_block_time); This is the same as the "K" equation I gave above, after converting mine from target to difficulty and substituting half_life = T * N. To show why their "half_life" wording is almost correct: WTEMA is a very close approximation of ASERT obtained by replacing e^x with 1+x, so it behaves like this:

T = BLOCK_TIME_SEC
T * N = WTEMA_HALF_LIFE in seconds
t = last_block_time (aka solvetime, i.e. t = last_timestamp - last_last_timestamp)
next_diff = last_diff / e^(-1/N + t/T/N)
next_diff = last_diff * e^((-(t/T-1))/N)  # error signal is t/T-1 and mean life_time to kill the error is N blocks
next_diff = last_diff * e^(-(t-T)/(T * N))

Their "error signal" in seconds is t-T and their "half_life" in seconds is the "mean lifetime" to eliminate the error because it's base e instead of 2.

cryptforall commented 7 months ago

N above 300 will be good and there isn't a better algorithm, but there are several as good. I would not go higher than 600 for any block time because it's accurate enough. I like the following algorithm ("WTEMA)" better because it's simpler. It's a lot faster to calculate than LWMA, but after a million blocks LWMA with N=576 probably adds less than 30 seconds of validation. K = Kp + Kp*t/T/N - Kp/N K= current target Kp = prior target t = solvetime of Kp block (it's timestamp minus the timestamp before it) T = block time N = 100 to 300, half of LWMA to get the same response speed. But for coins using the Monero codebase this can't be used because there can't be a 1-block delay in Kp or the solvetime which is the timestamp of Kp minus the timestamp of the block before it. I can't remember, but there was something about the last timestamp or difficulty not being easily available to the difficulty algorithm.

It's based on BTC. The question was more about LWMA and whether it's still relevant, since I'm a little lost on the LWMA vulnerability dates. As for WTEMA, I don't quite understand how to implement it because I can't grasp the essence and haven't found examples like there are for LWMA; I could implement it incorrectly and end up with something vulnerable. If it's simpler than LWMA, that's better, as I will later need to implement it in Python, Go, JS, etc. The simpler the solution, the better. It is also important that the algorithm does not break on 32-bit and exotic architectures. I don't remember exactly, but some algorithms had problems with the math on RISC or ARM systems.

if use some other than BTC native algo "retarget setting" - nPowTargetTimespan do notting. and qustion was more about LWMA is latest version fixed and there was no new bugs ? How i undestand good settings for 5m is N 576


chainparams.cpp: consensus.nPowTargetTimespan. Also make sure fPowNoRetargeting is false. The code snippet is only for pow.cpp; you will need to follow each reference to a variable or constant back to its source file.

Yes, I understand which parameters are responsible for what. The question was more about the relevance of the algorithms and the recommended parameters for a 5-minute block; it is clearly not entirely correct to speak of a "difficulty recalculation time", since it is based on the past N blocks. And of course, all unused parameters are commented out, in the same way as the exceptions for blocks that do not exist in the new blockchain, for example BIP 30:

    // We don't have those blocks, so skip the BIP30 txs
    /*
    bool fEnforceBIP30 = !((pindex->nHeight==91722 && pindex->GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
                           (pindex->nHeight==91812 && pindex->GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f")));
    */

    bool fEnforceBIP30 = true;

    // We don't have those blocks, so skip the BIP30 txs

    /*
    if (fEnforceBIP30 || pindex->nHeight >= BIP34_IMPLIES_BIP30_LIMIT) {
    */

    if (fEnforceBIP30) {
        for (const auto& tx : block.vtx) {
            for (size_t o = 0; o < tx->vout.size(); o++) {
                if (view.HaveCoin(COutPoint(tx->GetHash(), o))) {
// We don't have those blocks, so skip BIP30

/*bool IsBIP30Repeat(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91842 && block_index.GetBlockHash() == uint256S("0x00000000000a4d0a398161ffc163c503763b1f4360639393e0e4c8e300e0caec")) ||
           (block_index.nHeight==91880 && block_index.GetBlockHash() == uint256S("0x00000000000743f190a18c5577a3c2d2a1f610ae9601ac046a38084ccb7cd721"));
@@ -5901,7 +5923,7 @@ bool IsBIP30Unspendable(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91722 && block_index.GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
           (block_index.nHeight==91812 && block_index.GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f"));
}
}*/

To be honest, I played with testnet and with N and the FTL.

N above 300 will be good and there isn't a better algorithm, but there are several as good. I would not go higher than 600 for any block time because it's accurate enough. I like the following algorithm ("WTEMA)" better because it's simpler. It's a lot faster to calculate than LWMA, but after a million blocks LWMA with N=576 probably adds less than 30 seconds of validation. K = Kp + Kp*t/T/N - Kp/N K= current target Kp = prior target t = solvetime of Kp block (it's timestamp minus the timestamp before it) T = block time N = 100 to 300, half of LWMA to get the same response speed. But for coins using the Monero codebase this can't be used because there can't be a 1-block delay in Kp or the solvetime which is the timestamp of Kp minus the timestamp of the block before it. I can't remember, but there was something about the last timestamp or difficulty not being easily available to the difficulty algorithm.

it's based on BTC, the question was more about LWMA if it's still relevant since I'm a little lost on the LWMA vulnerability dates As for WTEMA, I don’t quite understand how to implement this because I can’t understand the essence and haven’t found examples like with lwma, I can implement it incorrectly and it will turn out to be vulnerable devilry if it's simpler than LWMA, it's better as later i will need implement it in Python, Go, js, etc. The simpler the solution, the better. It is also important that the algorithm does not break on 32-bit and exotic architectures. I don’t remember exactly, but somewhere in some algorithms there were problems with mathematics at RISK or ARM systems.

if use some other than BTC native algo "retarget setting" - nPowTargetTimespan do notting. and qustion was more about LWMA is latest version fixed and there was no new bugs ? How i undestand good settings for 5m is N 576

if use some other than BTC native algo "retarget setting" - nPowTargetTimespan do notting. and qustion was more about LWMA is latest version fixed and there was no new bugs ? How i undestand good settings for 5m is N 576

Chainprams.cpp consensus.npowtargettimespan. Also make sure the no re targeting is false. The code snippet is only for pow.cpp. You will need follow each request of a variable or a constant from the source code file.

yes, I understand what parameters are responsible for what , the question was more about the relevance of the algorithms and the recommended parameters for a 5-minute block, it is clear that it is not entirely correct to say the "difficulty recalculation time" since it is based on the past "N" blocks And of course, all unused parameters are commented out in the same way as exceptions for non-existent blocks in the new blockchain, for example BIP 30

    // We don have that blocks so skip BIP30 tx's
    /*
    bool fEnforceBIP30 = !((pindex->nHeight==91722 && pindex->GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
                           (pindex->nHeight==91812 && pindex->GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f")));
    */

    bool fEnforceBIP30 = true;

    // We don have that blocks so skip BIP30 tx's

    /*
    if (fEnforceBIP30 || pindex->nHeight >= BIP34_IMPLIES_BIP30_LIMIT) {
    */

    if (fEnforceBIP30) {
        for (const auto& tx : block.vtx) {
            for (size_t o = 0; o < tx->vout.size(); o++) {
                if (view.HaveCoin(COutPoint(tx->GetHash(), o))) {
// We don have that blocks so skip BIP30

/*bool IsBIP30Repeat(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91842 && block_index.GetBlockHash() == uint256S("0x00000000000a4d0a398161ffc163c503763b1f4360639393e0e4c8e300e0caec")) ||
           (block_index.nHeight==91880 && block_index.GetBlockHash() == uint256S("0x00000000000743f190a18c5577a3c2d2a1f610ae9601ac046a38084ccb7cd721"));
@@ -5901,7 +5923,7 @@ bool IsBIP30Unspendable(const CBlockIndex& block_index)
{
    return (block_index.nHeight==91722 && block_index.GetBlockHash() == uint256S("0x00000000000271a2dc26e7667f8419f2e15416dc6955e5a6c6cdf3f2574dd08e")) ||
           (block_index.nHeight==91812 && block_index.GetBlockHash() == uint256S("0x00000000000af0aed4792b1acee3d966af36cf5def14935db8de83d6f9306f2f"));
}
}*/

Ignore xbuffertsa

  1. GetNextWorkRequired : This function takes in the last block index ( pindexLast ), the block header ( pblock ), and the consensus parameters ( params ). It first checks if the pindexLast is null, and if so, throws an exception. Then, it gets the height of the last block and increments it by 1. If the height is greater than or equal to a specific height value ( params.XCGZawyLWMAHeight ), it calls the LwmaGetNextWorkRequired function with the same parameters. Otherwise, it calls the GetNextWorkRequiredBTC function.
  2. LwmaGetNextWorkRequired : This function is called when the block height is greater than or equal to params.XCGZawyLWMAHeight . It first checks if the pindexLast is null, and if so, throws an exception. Then, it checks if a specific parameter ( params.XbufferTSAH ) is less than the height of the last block. If true, it calls the XbufferTSA function with the block header, consensus parameters, and the previous block index. Otherwise, it calls the LwmaCalculateNextWorkRequired function with the last block index and consensus parameters.
  3. XbufferTSA : This function calculates the next target difficulty for a block using the XbufferTSA algorithm. It first initializes some variables and constants. Then, it calculates an initial target guess based on a hashrate guess and the target spacing. If the target guess is greater than the maximum allowed target ( powLimit ), it sets the target guess to the maximum target. Next, it checks if the height is less than or equal to the averaging window size plus 1. If true, it returns the target guess as the compact representation. Otherwise, it performs a series of calculations to determine the next target based on the previous blocks' timestamps and targets. Finally, it checks if the block timestamp is before a specific time and adjusts the target if necessary. The final target is returned as the compact representation.
  4. LwmaCalculateNextWorkRequired : This function is called when the block height is less than params.XCGZawyLWMAHeight and is used to calculate the next target difficulty using the Lwma algorithm. If the fPowNoRetargeting parameter is true, it simply returns the current block's bits. Otherwise, it initializes some variables and constants. Then, it iterates over the previous blocks within the averaging window and calculates a sum of targets and a weighted average time. After the loop, it calculates the next target based on the sum of targets and the weighted average time. Finally, it returns the next target as the compact representation.
  5. GetNextWorkRequiredBTC : This function is called when the block height is not greater than or equal to params.XCGZawyLWMAHeight and is used to calculate the next target difficulty using the Bitcoin algorithm. It first initializes the proof-of-work limit ( nProofOfWorkLimit ) based on the consensus parameters. Then, it checks if the pindexLast is null and if so, returns the proof-of-work limit. Next, it checks if the block height plus 1 is not divisible by the difficulty adjustment interval. If true, it returns the current block's bits. Otherwise, it calculates the first block index within the difficulty adjustment interval and calls the CalculateNextWorkRequired function with the last block index, the first block's timestamp, and the consensus parameters. The result is returned as the compact representation.
  6. Use N, ftl as suggested and do hashrate charts to tweak. Adjust for v3

Fyi the reason i am doing this: if you understand the logic then you can even do it live on a blockchain at a set block. It's a good benefit

src/pow.cppp

unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock, const Consensus::Params& params)
{

    assert(pindexLast != nullptr);
    unsigned int nProofOfWorkLimit = UintToArith256(params.powLimit).GetCompact();
  /*

    // Only change once per difficulty adjustment interval
    if ((pindexLast->nHeight+1) % params.DifficultyAdjustmentInterval() != 0)
    {
        if (params.fPowAllowMinDifficultyBlocks)
        {
            // Special difficulty rule for testnet:
            // If the new block's timestamp is more than 2* 10 minutes
            // then allow mining of a min-difficulty block.
            if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2)
                return nProofOfWorkLimit;
            else
            {
                // Return the last non-special-min-difficulty-rules-block
                const CBlockIndex* pindex = pindexLast;
                while (pindex->pprev && pindex->nHeight % params.DifficultyAdjustmentInterval() != 0 && pindex->nBits == nProofOfWorkLimit)
                    pindex = pindex->pprev;
                return pindex->nBits;
            }
        }
        return pindexLast->nBits;
    }

    // Go back by what we want to be 14 days worth of blocks

    int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1);
    assert(nHeightFirst >= 0);

    if (nHeightFirst < 0) {
        return nProofOfWorkLimit;
    }

    const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst);
    assert(pindexFirst);
    */
    return Lwma3CalculateNextWorkRequired(pindexLast, params);
}

unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params)
{
    if (params.fPowNoRetargeting)
        return pindexLast->nBits;

    // Limit adjustment step
    int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime;
    if (nActualTimespan < params.nPowTargetTimespan/4)
        nActualTimespan = params.nPowTargetTimespan/4;
    if (nActualTimespan > params.nPowTargetTimespan*4)
        nActualTimespan = params.nPowTargetTimespan*4;

    // Retarget
    const arith_uint256 bnPowLimit = UintToArith256(params.powLimit);
    arith_uint256 bnNew;
    bnNew.SetCompact(pindexLast->nBits);
    bnNew *= nActualTimespan;
    bnNew /= params.nPowTargetTimespan;

    if (bnNew > bnPowLimit)
        bnNew = bnPowLimit;

    return bnNew.GetCompact();
}

// LWMA-1 for BTC & Zcash clones
// Copyright (c) 2017-2019 The Bitcoin Gold developers, Zawy, iamstenman (Microbitcoin)
// MIT License
// Algorithm by Zawy, a modification of WT-144 by Tom Harding
// For updates see
// https://github.com/zawy12/difficulty-algorithms/issues/3#issuecomment-442129791
// Do not use Zcash's / Digishield's method of ignoring the ~6 most recent 
// timestamps via the median past timestamp (MTP of 11).
// Changing MTP to 1 instead of 11 enforces sequential timestamps. Not doing this was the
// most serious, problematic, & fundamental consensus theory mistake made in bitcoin but
// this change may require changes elsewhere such as creating block headers or what pools do.
//  FTL should be lowered to about N*T/20.
//  FTL in BTC clones is MAX_FUTURE_BLOCK_TIME in chain.h.
//  FTL in Ignition, Numus, and others can be found in main.h as DRIFT.
//  FTL in Zcash & Dash clones need to change the 2*60*60 here:
//  if (block.GetBlockTime() > nAdjustedTime + 2 * 60 * 60)
//  which is around line 3700 in main.cpp in ZEC and validation.cpp in Dash
//  If your coin uses median network time instead of node's time, the "revert to 
//  node time" rule (70 minutes in BCH, ZEC, & BTC) should be reduced to FTL/2 
//  to prevent 33% Sybil attack that can manipulate difficulty via timestamps. See:
// https://github.com/zcash/zcash/issues/4021

unsigned int Lwma3CalculateNextWorkRequired(const CBlockIndex* pindexLast, const Consensus::Params& params)
{
    const int64_t T = params.nPowTargetSpacing;

  // For T=600 use N=288 (takes 2 days to fully respond to hashrate changes). It has
  // a StdDev of N^(-0.5), which will often be the change in difficulty over N/4 blocks when
  // hashrate is constant. 10% of blocks will have an error > 2x the StdDev above or below where D should be.
  // This N=288 is like N=144 in ASERT, which is N=144*ln(2)=100 in
  // terms of BCH's ASERT. BCH's ASERT uses N=288, which is like 2*288/ln(2) = 831 = N for
  // LWMA. ASERT and LWMA are almost indistinguishable once this adjustment to N is used. In other words,
  // 831/144 = 5.8 means my N=144 recommendation for T=600 is 5.8 times faster but SQRT(5.8) less
  // stable than BCH's ASERT. The StdDev for 288 is 6%, so 12% accidental variation will be seen in 10% of blocks.
  // Twice 288 is 576, which will have 4.2% StdDev and be 2x slower. This is reasonable for T=300 or less.
  // For T=60, N=1,000 will have 3% StdDev and may be plenty fast, but requires 1M multiplications & additions per
  // 1,000 blocks for validation, which might be a consideration. I would not go over N=576 and prefer 360
  // so that it can respond within 6 hours to hashrate changes.

    const int64_t N = params.lwmaAveragingWindow;

  // Low diff blocks for diff initiation.
  const int64_t L = 577; // expected N + 1, for security reasons, after the give-away blocks.

    // Define a k that will be used to get a proper average after weighting the solvetimes.
    const int64_t k = N * (N + 1) * T / 2; 

    const int64_t height = pindexLast->nHeight;
    const arith_uint256 powLimit = UintToArith256(params.powLimit);

   // New coins just "give away" first N blocks. It's better to guess
   // this value instead of using powLimit, but err on high side to not get stuck.
    if (params.fPowAllowMinDifficultyBlocks) { return powLimit.GetCompact(); }
    if (height <= L) { return powLimit.GetCompact(); }

    arith_uint256 avgTarget, nextTarget;
    int64_t thisTimestamp, previousTimestamp;
    int64_t sumWeightedSolvetimes = 0, j = 0;

    const CBlockIndex* blockPreviousTimestamp = pindexLast->GetAncestor(height - N);
    previousTimestamp = blockPreviousTimestamp->GetBlockTime();

    // Loop through N most recent blocks. 
    for (int64_t i = height - N + 1; i <= height; i++) {
        const CBlockIndex* block = pindexLast->GetAncestor(i);

        // Prevent solvetimes from being negative in a safe way. It must be done like this. 
        // Do not attempt anything like  if (solvetime < 1) {solvetime=1;}
        // The +1 ensures new coins do not calculate nextTarget = 0.
        thisTimestamp = (block->GetBlockTime() > previousTimestamp) ? 
                            block->GetBlockTime() : previousTimestamp + 1;

       // 6*T limit prevents large drops in diff from long solvetimes which would cause oscillations.
        int64_t solvetime = std::min(6 * T, thisTimestamp - previousTimestamp);

       // The following is part of "preventing negative solvetimes". 
        previousTimestamp = thisTimestamp;

       // Give linearly higher weight to more recent solvetimes.
        j++;
        sumWeightedSolvetimes += solvetime * j; 

        arith_uint256 target;
        target.SetCompact(block->nBits);
        avgTarget += target / N / k; // Dividing by k here prevents an overflow below.
    }
    nextTarget = avgTarget * sumWeightedSolvetimes; 

    if (nextTarget > powLimit) { nextTarget = powLimit; }

    return nextTarget.GetCompact();
}

// Check that on difficulty adjustments, the new difficulty does not increase
// or decrease beyond the permitted limits.
bool PermittedDifficultyTransition(const Consensus::Params& params, int64_t height, uint32_t old_nbits, uint32_t new_nbits)
{
    /*
    if (params.fPowAllowMinDifficultyBlocks) return true;

    if (height % params.DifficultyAdjustmentInterval() == 0) {
        int64_t smallest_timespan = params.nPowTargetTimespan/4;
        int64_t largest_timespan = params.nPowTargetTimespan*4;

        const arith_uint256 pow_limit = UintToArith256(params.powLimit);
        arith_uint256 observed_new_target;
        observed_new_target.SetCompact(new_nbits);

        // Calculate the largest difficulty value possible:
        arith_uint256 largest_difficulty_target;
        largest_difficulty_target.SetCompact(old_nbits);
        largest_difficulty_target *= largest_timespan;
        largest_difficulty_target /= params.nPowTargetTimespan;

        if (largest_difficulty_target > pow_limit) {
            largest_difficulty_target = pow_limit;
        }

        // Round and then compare this new calculated value to what is
        // observed.
        arith_uint256 maximum_new_target;
        maximum_new_target.SetCompact(largest_difficulty_target.GetCompact());
        if (maximum_new_target < observed_new_target) return false;

        // Calculate the smallest difficulty value possible:
        arith_uint256 smallest_difficulty_target;
        smallest_difficulty_target.SetCompact(old_nbits);
        smallest_difficulty_target *= smallest_timespan;
        smallest_difficulty_target /= params.nPowTargetTimespan;

        if (smallest_difficulty_target > pow_limit) {
            smallest_difficulty_target = pow_limit;
        }

        // Round and then compare this new calculated value to what is
        // observed.
        arith_uint256 minimum_new_target;
        minimum_new_target.SetCompact(smallest_difficulty_target.GetCompact());
        if (minimum_new_target > observed_new_target) return false;
    } else if (old_nbits != new_nbits) {
        return false;
    }
    */
    return true;
}

bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params& params)
{
    bool fNegative;
    bool fOverflow;
    arith_uint256 bnTarget;

    bnTarget.SetCompact(nBits, &fNegative, &fOverflow);

    // Check range
    if (fNegative || bnTarget == 0 || fOverflow || bnTarget > UintToArith256(params.powLimit))
        return false;

    // Check proof of work matches claimed amount
    if (UintToArith256(hash) > bnTarget)
        return false;

    return true;
}

src/kernel/chainparams.cpp

        consensus.nPowTargetSpacing = 5 * 60;  // 5 minutes
        consensus.lwmaAveragingWindow = 576;   // 48 hours of block time: 300 s * 576 = 172,800 s

You will also see consensus.nMinerConfirmationWindow = 2016; // nPowTargetTimespan / nPowTargetSpacing. But since our retargeting algorithm does not use it, we can ignore "nPowTargetTimespan", or even remove it from the code if nothing else uses it.

src/chain.h

static constexpr int64_t MAX_FUTURE_BLOCK_TIME = 432 * 20; // FTL = N*T/20 = 576 * 300 / 20 = 8640 seconds (written as 432 * 20)

/**
 * Timestamp window used as a grace period by code that compares external
 * timestamps (such as timestamps passed to RPCs, or wallet key creation times)
 * to block timestamps. This should be set at least as high as
 * MAX_FUTURE_BLOCK_TIME.
 */
static constexpr int64_t TIMESTAMP_WINDOW = MAX_FUTURE_BLOCK_TIME;

/**
 * Maximum gap between node time and block time used
 * for the "Catching up..." mode in GUI.
 *
 * Ref: https://github.com/bitcoin/bitcoin/pull/1026
 */
static constexpr int64_t MAX_BLOCK_TIME_GAP = 300 * 12; // 12 block times of 300 s = 3600 s

src/timedata.h

static const int64_t DEFAULT_MAX_TIME_ADJUSTMENT = 72 * 60; // MAX_FUTURE_BLOCK_TIME / 2 = 8640 / 2 = 4320 s = 72 min. Oddly, this comes out larger than Bitcoin's default of 70 * 60 for a 10-minute block.

src/net_processing.cpp

/** How frequently to check for stale tips */
static constexpr auto STALE_CHECK_INTERVAL{5min}; // BTC uses 10 min for a 10-minute block.
Whether this parameter needs adapting is a big question.

/// Age after which a stale block will no longer be served if requested as
/// protection against fingerprinting. Set to one month, denominated in seconds.
static constexpr int STALE_RELAY_AGE_LIMIT = 30 * 24 * 60 * 60;
/// Age after which a block is considered historical for purposes of rate
/// limiting block relay. Set to one week, denominated in seconds.
static constexpr int HISTORICAL_BLOCK_AGE = 7 * 24 * 60 * 60;

src/headerssync.cpp (or use the Bitcoin script for it to regenerate the values).

//! Store one header commitment per HEADER_COMMITMENT_PERIOD blocks.
constexpr size_t HEADER_COMMITMENT_PERIOD{330}; // value / 2 ?

//! Only feed headers to validation once this many headers on top have been
//! received and validated against commitments.
constexpr size_t REDOWNLOAD_BUFFER_SIZE{7221}; // 14441/606 = ~23.8 commitments // value / 2 ?

Awesome, now you are on track. @zawy12 provides the algorithm snippet, but the implementation is entirely on the dev. You can always ask zawy if he would like to take part in tests and experimental tweaks; he was very open when I was implementing this in 2018-2019.

cryptforall commented 7 months ago

The mimblewimble implementation of WTEMA is quite simply expressed by their actual code (consensus.rs, lines 389-390):

let next_diff = last_diff *
                WTEMA_HALF_LIFE / (WTEMA_HALF_LIFE - BLOCK_TIME_SEC + last_block_time);

WTEMA_HALF_LIFE is a constant with time units which makes the expression easiest to understand. It's not exactly a half-life numerically, but a "characteristic decay time".

mimblewimble has a 5-minute future time limit.

It's essentially the same thing here.

someunknownman commented 7 months ago

I still don't really get it.

K = Kp + Kp*t/T/N - Kp/N

K = current target; Kp = prior target; t = solvetime of the Kp block (its timestamp minus the timestamp before it);
T = block time; N = 100 to 300, half of LWMA's N to get the same response speed.

By t do you mean the previous block and the pre-previous block? For example, if the last block is 100, then we use blocks 99 and 98? And K = current target: is that the current target of the last block, or the target we calculate as the next difficulty?

What algorithm sets "N" there? Or can it be taken from LWMA? And do you mean to only replace the formula and add some missing variables, like Kp?

    // Define a k that will be used to get a proper average after weighting the solvetimes.
    const int64_t k = N * (N + 1) * T / 2; 

I've probably re-read it about 50 times already, and even tried asking Copilot and GPT-4 (the models produce about the same result) for a probable example implementation; judging by the nonsense they generated, they apparently didn't understand it either.

 unsigned int WtemaCalculateNextWorkRequired(const CBlockIndex* pindexLast, const Consensus::Params& params)
{
    const int64_t T = params.nPowTargetSpacing;
    const int64_t N = params.wtemaAveragingWindow;
    const int64_t height = pindexLast->nHeight;
    const arith_uint256 powLimit = UintToArith256(params.powLimit);

    if (params.fPowAllowMinDifficultyBlocks) { return powLimit.GetCompact(); }

    arith_uint256 Kp, K;
    int64_t t;
    int64_t previousTimestamp;

    const CBlockIndex* blockPreviousTimestamp = pindexLast->GetAncestor(height - 1);
    previousTimestamp = blockPreviousTimestamp->GetBlockTime();

    const CBlockIndex* block = pindexLast;

    t = (block->GetBlockTime() > previousTimestamp) ? 
                        block->GetBlockTime() - previousTimestamp : 1;

    Kp.SetCompact(pindexLast->nBits); // Kp = target of the block at `height`, per the formula
    K = Kp + Kp * t / T / N - Kp / N;

    if (K > powLimit) { K = powLimit; }

    return K.GetCompact();
}

In short, I didn't quite understand what we are computing in some of the variables.

zawy12 commented 7 months ago

In terms of LWMA, WTEMA's "N" will give the same speed of response and stability as LWMA with 2x that N. So where I said 200 to 600 for LWMA, you would use 100 to 300. Yes, if the height is H, then the timestamp of block H minus the timestamp of block H-1 is the solvetime t, and Kp is the target of block H. K is the target of the block the miner is currently working on. As far as I can tell, your code is correct. However, you have to change the BTC code to enforce monotonic (sequential) timestamps, which might be as simple as changing the median of the past 11 timestamps (MTP = 11) to MTP = 1, but this is a major change to BTC that might adversely affect code or assumptions in remote parts of the codebase. The alternative is to modify your code as I've done below to enforce monotonic timestamps inside the difficulty algorithm. It assumes your MTP is 11, like Bitcoin's. I have not checked it. I've been wanting someone to do this for a long time because I like it better than LWMA's and ASERT's complexities.

unsigned int WtemaCalculateNextWorkRequired(const CBlockIndex* pindexLast, const Consensus::Params& params)
{
  const int64_t T = params.nPowTargetSpacing; // "target block time"
  const int64_t N = params.wtemaAveragingWindow; // "mean lifetime" in blocks to correct the observed error which is t/T-1.
  const int64_t height = pindexLast->nHeight;
  const arith_uint256 powLimit = UintToArith256(params.powLimit);

  if (params.fPowAllowMinDifficultyBlocks) { return powLimit.GetCompact(); }
  // Young chains: GetAncestor() below needs 11 prior blocks, so fall back to powLimit.
  if (height < 12) { return powLimit.GetCompact(); }

  arith_uint256 K, Kp; // targets of the next block and of the block at `height`.
  int64_t t;           // solvetime of the block at `height`.

  // Loop through the 11 most recent blocks before the current height to make sure
  // no prior timestamp was larger than the timestamp at `height`.
  // This assumes the median time past (MTP) is 11, as in Bitcoin.
  // It is a safe way to prevent negative solvetimes, which would enable a catastrophic exploit.
  // Do not simply attempt "if (solvetime < 1) { solvetime = 1; }".

  int64_t previousTimestamp = 1;
  for (int64_t i = height - 11; i < height; i++) {
      const CBlockIndex* block = pindexLast->GetAncestor(i);
      previousTimestamp = std::max(previousTimestamp, block->GetBlockTime());
  }
  t = std::max<int64_t>(1, pindexLast->GetBlockTime() - previousTimestamp); // enforce monotonic timestamps

  // Calculate the next target using the WTEMA approximation of relative ASERT.
  Kp.SetCompact(pindexLast->nBits); // target of the block at `height`
  K = Kp + Kp * t / T / N - Kp / N;

   if (K > powLimit) { K = powLimit; }

   return K.GetCompact();
}
someunknownman commented 7 months ago

Let's ask about the algorithm etc. over there, so as not to clutter the LWMA issue.

cryptforall commented 7 months ago

Let's ask about the algorithm etc. over there, so as not to clutter the LWMA issue.

Yeah, why not stick with one algo, or create a hybrid?

someunknownman commented 7 months ago

Let's ask about the algorithm etc. over there, so as not to clutter the LWMA issue.

Yeah, why not stick with one algo, or create a hybrid?

As I said, this is experimental; by the same logic you could ask why create many chains instead of just using Bitcoin :)

Yespower is good for pinning users to CPUs, but it has platform-support problems.

Argon2id is supported on many platforms. And since you mention Argon2: the project will use "argon2id", not the native variant.

// CBlockHeader::GetArgon2idPoWHash() instance
// -> Serialize Block Header using CDataStream
// -> Compute SHA-512 hash of serialized data (Two Rounds)
// -> Use the computed hash as the salt for argon2id_hash_raw function for the first round
// -> Call argon2id_hash_raw function for the first round using the serialized data as password and SHA-512 hash as salt
// -> Use the hash obtained from the first round as the salt for the second round
// -> Call argon2id_hash_raw function for the second round using the serialized data as password and the hash from the first round as salt
// -> Return the hash computed in the second round (hash2)

Yespower acts there like a bottleneck to prevent hash-rate fluctuations; mining speed stays near the 5-minute target even at powLimit, so even with the give-away blocks for LWMA's 577 there will be no problems at launch.

I deliberately made the Argon2 memory fluctuate from 4 MB to 32 MB; that strongly slows down any possible GPU/FPGA miner implementation, but does not slow down CPUs, and still allows fast verification in custom wallets (mobile wallets etc.).

In theory, the two algorithms let us create something like a "RAM coin", where ever more RAM is needed as the Argon2 parameters retarget (like HDD coins, but RAM). The two-algo design also fixes the problem of mobile devices and SPV wallets: if the first algorithm eventually needs, say, 20 GB of RAM, we cannot do the same on a mobile device or light node, so the second algorithm gives them an alternative way to verify, as an extra PoW.

cryptforall commented 7 months ago


From what I see, Yespower is incomplete development-wise and has only one group of miners. As for Argon2id, not much either. How will you collect data? Your own mining rig, and will that be enough?

someunknownman commented 7 months ago

From what I see, Yespower is incomplete development-wise and has only one group of miners. As for Argon2id, not much either. How will you collect data? Your own mining rig, and will that be enough?

What do you mean by "collect data"? I will mine the same way as all users, by CPU.


Mining will be via "generatetoaddress" in the wallet at first; then, if the project gains fans, possibly someone will create mining software.

It is always possible to run generatetoaddress -1 "address" -1 and mine forever.

No mining rigs; the same regular mining as all users. Release will be at genesis block + 4 blocks including genesis, but genesis must not be counted, so 3 (to fixate the BIP rules at block 2). As I said, no premine of any kind is planned for the project (dev fees, mining the first low-difficulty blocks, or pinning a larger reward to some height: that is all premine, no matter what you call it).

The base is the 26.1 source code from BTC. The biggest pain there is fixing the tests for the new mining logic; I know some who don't fix them and make releases as-is. But then people use old versions with CVEs via "generate coins for free" sites :)

Yespower needs to be only on full nodes if they expect to mine; light wallets can use only Argon2id, which is simple to implement from the native libraries that exist in almost every language.

And this is why I asked about LWMA and possible replacements for it: it must be simple to port to any language, without math problems or anything like that.

Or maybe I misunderstood your question.

cryptforall commented 7 months ago


LWMA will be your retargeting, so it will only add to the coin's strength. I like the enthusiasm and hope it works out great for you!