nanocurrency / nano-node

Nano is a digital currency. Its ticker is XNO and its currency symbol is Ӿ.
https://nano.org
BSD 3-Clause "New" or "Revised" License

Proposal: Dynamic PoW difficulty based on network TX per second #196

Closed SergiySW closed 1 year ago

SergiySW commented 7 years ago

The current user-generated PoW is too high for large services & mobile users. It also cannot fully prevent a large-scale TX attack. With dynamic PoW difficulty, representative nodes can increase the required block threshold during a spam attack attempt & keep it easy in usual situations.

HostFat commented 6 years ago

I'm not sure that it is easy to tell when there is a spam attack and when there are just many new users. Something dynamic could be added, but I think that the main PoW difficulty should be changed by stake voting.

thehen101 commented 6 years ago

This is a good idea, but it could still be abused. Someone could prepare a huge cache of thousands of transactions and release them slowly over the course of a day or so. That would mean that for that day the RaiBlocks network would be unusable, as the dynamic threshold would be so high that no one could keep up. Transactions would take minutes to generate the PoW for, so at that point people would just not use XRB, as transactions would no longer be instant.

Suggestion: the servers which the lite phone wallets connect to could compute the PoW for them for a small fee. This means instant transactions for anyone on a phone, BUT it may drive a few mobile users away.

Perhaps the network could also vote to reject obviously malicious transactions that are being spammed around the network (to stop a bot playing ping-pong with two addresses and some XRB). Perhaps malicious IPs could also be blocked by the network temporarily.

samholmes commented 6 years ago

Without a dynamic PoW difficulty, an attacker could perform a precompiled PoW attack. We need a good solution for this type of attack vector. The whitepaper mentions it but offers only proposals, not a resolution.

Transaction-rate limiting on a single account could be effective, but the attacker could then create multiple accounts to bypass the rate limit. Transaction-rate limiting across all accounts would be an alternative, but isn't that the point of the PoW? If we limited the transaction rate across all accounts in the network we could do away with PoW, but this wouldn't be a good solution.

The way I see it, spam is a problem of overall network bandwidth. PoW should be used to limit the bandwidth using hash power, but the ratio of hash power to bandwidth throughput is ever-changing. The only way to adapt to fluctuations in this ratio is to implement a form of dynamic PoW difficulty. However, as @thehen101 mentioned, this could be abused.

I see that stake should come into play to make it more expensive to abuse the network. If we rely only on PoW, then greater hash power wins in the network, so stake should be a resource to increase difficulty. In conclusion, I propose: the smaller the account balance, the more hash power is required in the dynamic PoW difficulty. This means that 0 XRB transactions would require the utmost amount of PoW, while transactions with larger amounts of XRB would require less PoW. With this in place, to conduct an effective precompiled PoW attack you would need a greater number of accounts containing significant XRB balances. This would cost the attacker computing power in addition to the cost of stake in the network.
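
A rough sketch of what such a balance-weighted requirement could look like; the function name and constants are purely illustrative, not anything from the node:

```cpp
// Hypothetical sketch: the smaller the account balance, the larger the work
// multiplier applied on top of the network's base requirement. The names and
// constants are illustrative only.
double work_multiplier_for_balance (double balance_xrb)
{
	double const max_multiplier = 16.0;   // what a zero-balance account pays
	double const reference_balance = 1.0; // balances at or above this pay the base cost
	if (balance_xrb >= reference_balance)
	{
		return 1.0;
	}
	// Interpolate linearly between max_multiplier (balance 0) and 1.0 (reference balance).
	return max_multiplier - (max_multiplier - 1.0) * (balance_xrb / reference_balance);
}
```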

DavidCMoncrief commented 6 years ago

@samholmes I have a philosophical and practical issue with the PoW being linked to your stake. Many users of Rai are likely to be from developing countries, may not have much XRB to begin with, and would likely use a mobile phone for transactions. I've thought about this a bit, and maybe it's possible in the future for the PoW to be done by the point-of-sale machine.

There needs to be a way to identify network cloggers and put them on a shot clock or increase their PoW. IOTA is dealing with this now, and CFB stated something to the effect of "we know how to turn it off, we're just collecting data so we better understand the attack".

@thehen101 I do not think paying a fee for fast-laning a transaction, or even a fee to be able to use the mobile wallet, should be implemented. That is what makes Rai fundamentally different from (and IMHO better than) all other cryptocurrencies. While I guess there is nothing stopping a third party from providing this service, I'm not sure it would be received well.

androm3da commented 6 years ago

Dynamic PoW sounds like a reasonable idea, but it has to be something that all nodes can perceive in a stable way so that transactions can operate seamlessly. How can all nodes calculate the difficulty if "tx rate" is an input? They cannot. Some nodes will see different rates based on global network topology and activity levels, and router congestion will cause datagrams to be lost, so consensus on the tx rate could not be formed.

Accepting a transaction fee of any size would effectively renegotiate the value proposition of the coin. It would be similar to Bitcoin deciding it no longer wants an inflation limit, or that it wants to increase the inflation rate.

scott79 commented 6 years ago

If voting occurs on all blocks, could rate-limiting the number of txs voted on for an account, based on the value held in that account, be a solution? Measurement could be done by implementing a block window for rate detection. The voting rate should be dynamic and could/should be proportional to the account's total available balance, the total available supply, block-rate activity, and other criteria.

Txs in breach would be stored in an LRU queue, which could result in a lost vote or simply delayed voting.
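
A rough sketch of this per-account rate-limit idea, with a sliding block window and an LRU-style queue for throttled accounts; all names and constants are hypothetical:

```cpp
#include <cstdint>
#include <deque>
#include <string>
#include <unordered_map>

// Hypothetical sketch: count how many of an account's blocks fall inside a
// sliding window of recently observed blocks, and defer voting on blocks that
// exceed an allowance proportional to the account's balance.
struct rate_limiter
{
	uint64_t window_size = 1000; // window measured in observed blocks
	uint64_t blocks_seen = 0;    // total blocks observed so far
	std::unordered_map<std::string, std::deque<uint64_t>> history; // account -> block positions
	std::deque<std::string> deferred; // LRU-style queue of accounts with delayed voting

	// Allowance grows with balance; the constants are illustrative.
	uint64_t allowance (double balance) const
	{
		return 10 + static_cast<uint64_t> (balance);
	}

	// Returns true if the block should be voted on now, false if voting is deferred.
	bool admit (std::string const & account, double balance)
	{
		auto & recent = history[account];
		++blocks_seen;
		recent.push_back (blocks_seen);
		while (!recent.empty () && blocks_seen - recent.front () > window_size)
		{
			recent.pop_front ();
		}
		if (recent.size () > allowance (balance))
		{
			deferred.push_back (account);
			return false;
		}
		return true;
	}
};
```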

guilhermelawless commented 6 years ago

@scott79 creating multiple accounts avoids that.

AyiSoli commented 6 years ago

When increasing the PoW, the effect must be assessed from the perspective of allied service operators: how will wallet providers and exchanges deal with it? What could the impact be? It's safe to say the majority of the network's users will rely on subsidised PoW servers. The value of the transaction should also be a factor: under a spam or high-TPS scenario, the PoW for a low-value transaction should be higher than the PoW for a high-value transaction.

P0ar commented 6 years ago

I think it's worth leaving here what Colin said in the closed issue above:

We have plans for a queueing mechanism weighted on (least recently used account) (amount) (PoW difficulty). Nodes will be able to dynamically increase the PoW difficulty they generate to get prioritization.

Laurentiu-Andronache commented 6 years ago

@samholmes has given the solution

the smaller the account balance, the more hash power is required in the dynamic PoW difficulty. This means that 0 XRB transactions would require the utmost amount of PoW, while transactions with larger amounts of XRB would require less PoW. With this in place, to conduct an effective precompiled PoW attack you would need a greater number of accounts containing significant XRB balances. This would cost the attacker computing power in addition to the cost of stake in the network.

It's perfect really!

samholmes commented 6 years ago

@Laurentiu-Andronache The only downside, as mentioned by @DavidCMoncrief, is that poorer economies (countries) would be at a disadvantage when using the currency. Not only would those people have less money and therefore smaller transactions, they would also have fewer resources when it comes to computing power/hash power. So the problem has to do with the non-uniformity of capital across the globe.

I think this suggests that there should be more localized sub-networks within the whole network. Transactions within a poorer network usually don't concern themselves with transactions within a richer network. If there were a way to identify sub-networks and determine the required hash rate for transactions, then we would have a more asynchronous network overall. The only challenge is when these sub-networks need to interact: in that case, requiring more PoW for transactions from the poorer sub-net to the richer sub-net is an understandable expense. There would still be the issue that these poorer sub-networks would be less secure when it comes to spam, but would there be much economic incentive to spam such a poor sub-network? Maybe not. This is just another idea I'll throw out there.

kvrban commented 6 years ago

There was a discussion on Reddit about dynamically adjusting the PoW difficulty. Maybe there is some inspiration to be found there: https://www.reddit.com/r/nanocurrency/comments/8y0ljp/what_is_the_incentive_to_spam_the_nano_network/e279oy7

My comment was that any type of account-based PoW adjustment can only work if opening a new account is much more PoW-expensive than a normal transfer transaction. Otherwise an attacker would just spam with many newly opened accounts. A high PoW cost to open a new account hits spammers much harder than normal Nano users, who use their accounts long-term.
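
A rough sketch of this suggestion, with purely placeholder thresholds; the point is only that an open block demands measurably more work than a normal transfer:

```cpp
#include <cstdint>

// Hypothetical sketch: an open block (new account) must meet a higher work
// threshold than a normal transfer. The constants are placeholders, not the
// node's real values.
enum class block_purpose
{
	open_account,
	normal_transfer
};

uint64_t work_threshold (block_purpose purpose)
{
	uint64_t const base = 0xffffffc000000000ULL;       // placeholder base threshold
	uint64_t const open_extra = 0x0000003000000000ULL; // extra requirement for opens
	return purpose == block_purpose::open_account ? base + open_extra : base;
}
```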

guilhermelawless commented 6 years ago

@kvrban The problem is that it also hits use-cases such as payment processors, who create an account for each transaction. At least, Brainblocks does.

kvrban commented 6 years ago

Payment processors can reuse accounts. There is no requirement that an account be used only once.

tiagocanto01 commented 6 years ago

My idea is that each account would have its own PoW, calculated from two variables:

  1. A higher number of transactions results in higher PoW (timestamps would need to be implemented in each block).
  2. A higher balance on the account results in lower PoW. This would be good for exchanges and applications, which wouldn't need to pay expensive fees for high volume. On the other hand, someone who wants to attack the network will not have an account with a high amount of Nano. People who have a little Nano and aren't spammers don't need to make a lot of transactions, so they would not have a high PoW because of variable 1. (A rough sketch combining both variables follows below.)
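
A rough sketch of how the two variables could be combined into a per-account work multiplier; names and constants are illustrative only:

```cpp
#include <cstdint>

// Hypothetical sketch: variable 1 (recent transaction count) pushes the work
// multiplier up, variable 2 (a large balance) pulls it back down.
double account_work_multiplier (uint64_t txs_last_hour, double balance_nano)
{
	double const rate_penalty = 0.05 * static_cast<double> (txs_last_hour); // variable 1
	double const balance_relief = balance_nano >= 100.0 ? 0.5 : 0.0;        // variable 2
	double multiplier = 1.0 + rate_penalty - balance_relief;
	return multiplier < 1.0 ? 1.0 : multiplier; // never drop below the network base
}
```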

Unlike what many people have said, we should not consider the value of the transaction, because that would make micropayments unfeasible.

I also think that receive blocks don't need PoW.

inkeliz commented 6 years ago

I think we need to fight against pre-computed PoW, not just increase the PoW in the first place. Even with dynamic PoW, the attacker can compute a PoW that meets the new requirements in advance. For instance, if today the requirement is hash < 100, the attacker can already pre-compute a hash < 80; if the difficulty rises to < 90, the attacker already has it.

In my view, what prevents a pre-computed PoW is something timestamp-like. A signature is chaotic and not predictable, but verifiable.

My idea is something like the following, but not exactly:

  1. All voters with > 0.1% of the weight choose a difficulty, combine it with a rounded unix timestamp and a sequence number (if change votes are needed), then sign it: Ed25519(difficulty || unix-time || sequence).
  2. Once the nodes agree on the values, a node needs to create a PoW using all of those signatures as part of the derivation, so something like Blake2(size = 8, message = Nonce || AllVotersSignatures || previous).

In this case, the only way to pre-compute the PoW is to predict the signatures of all nodes, which is not possible. The main problem is reaching consensus, because a desynchronized node could believe it has all the signatures when it does not. However, that's the idea.
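
A rough sketch of the derivation described above; std::hash stands in for the 8-byte BLAKE2b digest purely so the sketch compiles, and all names are hypothetical:

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Placeholder for an 8-byte BLAKE2b digest; std::hash is used here only so the
// sketch compiles and runs.
uint64_t digest (std::string const & message)
{
	return std::hash<std::string>{} (message);
}

// The work hash is derived from the nonce, all current voter signatures and the
// previous block hash, so it cannot be computed before those signatures exist.
bool work_valid (uint64_t nonce,
                 std::vector<std::string> const & voter_signatures,
                 std::string const & previous_hash,
                 uint64_t threshold)
{
	std::string message = std::to_string (nonce);
	for (auto const & signature : voter_signatures)
	{
		message += signature;
	}
	message += previous_hash;
	return digest (message) >= threshold;
}
```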

If I understand VoteStapling (https://github.com/nanocurrency/raiblocks/pull/1006) correctly, it could be used in the same way: if the vote has a lifetime within which it must be approved, the PoW could include the VoteStapling in the derivation.

rkeene commented 5 years ago

As a follow-on to #1298 we will be investigating changing the floor (work minimum) dynamically based on network conditions -- transactions below the current floor won't get added to the queue created as part of #1298

zhyatt commented 5 years ago

Related to https://github.com/nanocurrency/nano-node/issues/1336

gabparrot commented 5 years ago

Why not have an algorithm with multiple factors instead of trying to find a single-factor solution? Maybe this isn't feasible, as I'm not as knowledgeable as you all, but instead of just comparing the PoW done for a block against the present PoW floor, couldn't it be individual and score-based, while still being dynamic?

Example:

Most of this calculation could be done locally without the need for more bandwidth. At least I think.

This would open the door for an even smaller standard PoW.

Just curious

(user#8804)

zhyatt commented 5 years ago

Just a note to clarify that Dynamic PoW is being implemented in V19 through various PRs, some of which can be seen in https://github.com/nanocurrency/nano-node/issues/1336. This issue was kept open and moved to the Research for Future Release milestone because there were discussions and suggestions for other types of PoW changes we wanted to keep alive and in consideration going forward.

paulharwood commented 3 years ago

Can the node/network also reject transactions with work below a threshold of, say, 90% of the current difficulty? Not reaching the difficulty threshold may render pre-computed attacks useless and prevent the tx from ever being rebroadcast, effectively localising the problem.
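
A rough sketch of the rejection rule being asked about, treating difficulty as an abstract score rather than the node's internal representation:

```cpp
// Hypothetical sketch: a block whose work is below a fraction of the difficulty
// this node currently observes is rejected instead of queued.
bool accept_block (double block_difficulty, double observed_network_difficulty,
                   double floor_fraction = 0.9)
{
	return block_difficulty >= floor_fraction * observed_network_difficulty;
}
```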

Does the difficulty need to be uniform over the network?

Excuse my ignorance if I've not understood the protocol well enough for these questions to be valid.

zhyatt commented 3 years ago

Can the node/network also reject transactions with work below a threshold of, say, 90% of the current difficulty? Not reaching the difficulty threshold may render pre-computed attacks useless and prevent the tx from ever being rebroadcast, effectively localising the problem.

Does the difficulty need to be uniform over the network?

Excuse my ignorance if I've not understood the protocol well enough for these questions to be valid.

One of the issues with this approach is that, due to the asynchronous nature of the network, nodes could be seeing a different set/order of transactions in any given time period. Since work difficulty can be changed on a block before confirmation, that also means potentially different levels seen for an individual block at different times. This isn't a major issue when those transactions are kept around and eventually resolved, but if you start dropping them at a threshold, then different nodes across the network may not have the same set of blocks they are trying to reach confirmation on. This misalignment slows down the ability to clear elections, requires bootstrapping blocks that were prioritized on some nodes but dropped on others, and has other effects that reduce efficiency.

There have been discussions of trying to get the Principal Representatives to agree on certain common levels, and this type of difficulty threshold could be a candidate for that, but in order to come to agreement you need more communication/negotiation/consensus, which is costly. It could still be worth it given the throttling it could provide under load, before the network is overwhelmed; when not under load, the extra bandwidth and resources between PRs aren't as concerning.

RobbertJanSW commented 3 years ago

[edit2] I just noticed we already have something like this implemented by now.. :-) [/edit2]

How about adjusting the difficulty based on the blocks reaching quorum at any given moment? No extra negotiation would be needed; it would simply be based on the current network quorum and the block difficulty already in there.

In the end, it's all about the PR nodes being able to vote. Spam is only a problem when it saturates too many PR nodes, impacting the network's ability to reach quorum.

A PR node can independently determine whether it is healthy or lagging behind. Once it starts lagging behind (unchecked blocks), it can adjust its own difficulty requirement for new blocks to be processed / voted upon. The more it lags behind, the higher it adjusts its difficulty threshold. When every PR does this for itself, each PR starts throttling the processing of incoming blocks when it becomes overwhelmed, in such a way that it only processes the incoming blocks with the highest difficulty.
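
A rough sketch of this per-PR throttling, where the locally demanded difficulty scales with the size of the unprocessed backlog; the constants are illustrative:

```cpp
#include <cstddef>

// Hypothetical sketch: the larger the local backlog of unprocessed blocks, the
// higher the difficulty this PR demands before processing / voting on new blocks.
double local_difficulty_threshold (std::size_t backlog_size)
{
	std::size_t const comfortable_backlog = 10000; // a backlog the node can clear easily
	double const base_difficulty = 1.0;
	if (backlog_size <= comfortable_backlog)
	{
		return base_difficulty;
	}
	// Scale the requirement with how far the node is lagging behind.
	return base_difficulty * (static_cast<double> (backlog_size) / comfortable_backlog);
}
```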

Now consider that multiple PRs start throttling this way as a result of a spam attack. Blocks reaching quorum would have a difficulty that's consistent with a load that can be handled by PRs representing >50% of the voting weight under the current TX load.

So any node on the network can create new TXs based upon the difficulty in the last 'n' blocks reaching quorum.

Naturally, difficulty has to be able to ease back down. So nodes would have to create new blocks based upon the lowest quorum-reached block, resulting in all new blocks having a difficulty ranging from a little bit below to some range above that "quorum difficulty".

Example, when a node creates a new TX/block:

difficultyNetworkTarget = lowest difficulty of all blocks reaching quorum in the last second
difficultyNewBlock = difficultyNetworkTarget + rand(-1, 1000)

The values are just exemplary, of course. But the idea is that nodes start creating blocks in a difficulty range from 0.1% (a little) below what the network was able to handle in the last second to 99.9% above it, with a set minimum, of course. So when the network is not overloaded, all blocks would be created with a random difficulty at or just above the very lowest difficulty. When it starts being overloaded, blocks would be generated within a climbing difficulty range.
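
A rough sketch of this block-creation rule, with the band and minimum as purely illustrative values:

```cpp
#include <random>

// Hypothetical sketch: a new block targets a random point in a band from
// slightly below to well above the lowest difficulty that reached quorum in
// the last second, never dropping below the set minimum.
double new_block_difficulty (double lowest_quorum_difficulty_last_second,
                             double network_minimum)
{
	static std::mt19937 rng{ std::random_device{} () };
	std::uniform_real_distribution<double> band (-0.001, 1.0); // ~0.1% below to ~100% above
	double target = lowest_quorum_difficulty_last_second * (1.0 + band (rng));
	return target < network_minimum ? network_minimum : target;
}
```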

Once a spam attack starts, PRs would keep themselves healthy, delaying only the TXs/blocks with the lowest difficulty until the attack is over. Nodes will in turn start creating blocks with a difficulty the network can handle. Once the spam stops, PRs will over time process the blocks with lower difficulty, lowering the quorum-reaching difficulty. In turn, the difficulty range of newly created blocks will start to ease back down.

On top of all this, the difficulty should always have a block-based (account-based?) weight factor of some kind, requiring very small TXs sending just-received coins to have a higher difficulty. So when speaking of quorum-reaching difficulty, I mean "effective difficulty": the actual difficulty in a block minus a penalty, the penalty being 0 or higher based upon factors like tiagocanto01 proposed in Aug 2018.

Consider in all this that when I say "difficulty", it can be implemented as any form of difficulty (PoW/PoS). Let's not assume we need PoW here.

[edit] In fact, since all newly created blocks in this concept would already have slightly different difficulties falling inside the "load range" of the last second, PRs don't even have to adjust their difficulty actively; they only have to process blocks prioritized by block difficulty.

All this should be combined with the requirement to sign a block including some kind of proof that it was created in time (recently). For this, please see inkeliz's comment earlier. This would prevent precompiled spam attacks with built-in higher block difficulty. Maybe it is worth considering dropping blocks created with a far higher difficulty than the current quorum-reaching difficulty. That would throttle a precompiled spam attack in itself, since the pace of difficulty adjustments is difficult to predict. [/edit]

qwahzi commented 1 year ago

Closing for now, since dynamic PoW was implemented in V19 and then later removed in favour of balance + LRU prioritization: https://docs.nano.org/protocol-design/spam-work-and-prioritization/