nochowderforyou / clams


Dynamic moving block size #221

Open creativecuriosity opened 8 years ago

creativecuriosity commented 8 years ago

Re: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008017.html

The recent debate inside the BTC community regarding blocksize adjustment might be best solved in the forthcoming hard fork for CLAM.

I personally favor some sort of network-signaled dynamic system, as opposed to a static "kick-the-can" approach.

Further, our current specifications leave us open to long-term bloat attacks at 10x the rate of BTC, given our block-time specification. Though, it should be mentioned, we are likewise only 1/10th as vulnerable to DDoS attacks.

Regardless, some type of unified fee/blocksize/blocktime policy should be considered for the upcoming hard fork. Assuming things go well for CLAM, it can only become more difficult to solve such potential problems in the future.

Connected concerns:

  1. DDoS vulnerability of a sub-demand transaction space.
  2. Bloat vulnerability of a supra-demand transaction space.
  3. DDoS vulnerability of a sub-fee byte cost.
  4. Bloat vulnerability of a sub-fee byte cost.
  5. Tragedy of the commons, given possible CLAMspeech application storage.
  6. Tragedy of the commons, given long-term marginal cost of transactions.

Is it possible to create a system with competing incentive curves that will find equilibrium at a near-efficient solution?

Thoughts?

dooglus commented 8 years ago

I personally favor some sort of network-signaled dynamic system, as opposed to a static "kick-the-can" approach.

I don't know how that would work. Are you suggesting that if we see a lot of almost-full blocks we increase the blocksize limit? That seems dangerous. At the moment we are limited by the 1 MB blocksize limit to having to store at most 1 MB per minute. If we dynamically increase the limit based on "demand", then there's no limit to how fast the blockchain can grow, making it possible for an attacker to exhaust our ability to store his transactions.

Note that we already have 10 times the capacity of Bitcoin (having the same blocksize limit, but ten times the target block frequency) and so even if we had the same transaction rate as Bitcoin we would still only be at 10% of capacity. This gives us time to see how things work out for Bitcoin before we need to act.

In general I don't think it's possible for a decentralised system to be "universally available" (#218: The CLAM network is universally accessible and shall remain universally accessible) without also being vulnerable to DoS attacks (DoS'ers are people too). The way to prevent DoS is to make it too expensive for the attacker to carry out his attack. In doing so you likely also make it too expensive for the poorest member of the community to use the system.

creativecuriosity commented 8 years ago

I don't know how that would work. Are you suggesting that if we see a lot of almost-full blocks we increase the blocksize limit?

In the beginning, given CLAM's relatively small transaction load, such a change would dynamically adjust downward. But I don't think such a change can exist in a vacuum; it would have to be accompanied by other complementary changes as well.


This got a bit long-winded and is likely not free from errors. My apologies.


Overall Concept

The only way a dynamic system works is if there are competing interests which inevitably reach an equilibrium.

In this case, you would simultaneously adjust the consensus required per-byte fee and the block size.

If, over the long-term window, averageActualBlockSize / currentMaxBlockSize > 0.75, we adjust both the block size limit and the consensus per-byte fee upward.

Conversely, if averageActualBlockSize / currentMaxBlockSize < 0.25, we adjust both the block size limit and the consensus per-byte fee downward.

The figures 0.75 and 0.25 are simply for illustration. Removing the upper and lower 20th percentiles, similar to GMaxwell's proposal, would also likely be wise.
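The adjustment rule above can be sketched roughly as follows. The window length, the 0.75/0.25 bounds, the multiplicative step, and the percentile trimming are all illustrative assumptions, not an agreed specification:

```python
def adjust_limits(block_sizes, max_block_size, fee_per_byte,
                  step=1.1, upper=0.75, lower=0.25, trim=0.2):
    """Illustrative sketch: adjust the block size limit and the
    consensus per-byte fee together, based on how full the blocks
    in the retrospective window were. All parameters are assumptions."""
    sizes = sorted(block_sizes)
    k = int(len(sizes) * trim)
    # Drop the upper and lower percentiles, per GMaxwell's proposal.
    trimmed = sizes[k:len(sizes) - k] or sizes
    fill_ratio = (sum(trimmed) / len(trimmed)) / max_block_size

    if fill_ratio > upper:        # blocks mostly full: raise limit and fee
        max_block_size *= step
        fee_per_byte *= step
    elif fill_ratio < lower:      # blocks mostly empty: lower limit and fee
        max_block_size /= step
        fee_per_byte /= step
    return max_block_size, fee_per_byte
```

Because the limit and the fee move together, filling blocks to force the limit upward also forces the attacker's own per-byte costs upward.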


Equilibrium

We can assume that a lower per-byte fee will tend to increase transaction load. Increased transaction load will tend to increase the block size limit.

We can assume that a higher per-byte fee will tend to decrease transaction load. Decreased transaction load will tend to decrease the block size limit.

Given this, there should be a point of equilibrium at which there is fee-market competition for block transaction space. We essentially end up with a Cartesian plane with two crossing curves. Let the X axis be the minimum per-byte fee and block size, and the Y axis be the transaction load itself.

On this plane we can draw two curves: a demand curve and a supply curve. The demand curve represents the demand for transaction load at each fee level. We can assume that as the per-byte marginal cost increases, the demand for that space will fall.

The supply curve represents the supply of transaction space at each fee level. Because we have designed the transaction space to correlate positively with the per-byte fee level, we can assume this curve represents the increased space available as the per-byte fee increases.

At any horizontal line below the point where the two curves cross, there is an abundance of available "space" but low demand for that space due to exorbitant fees, i.e. a surplus. This should cause blocks to go unfilled and result in a downward shift of the supply curve until equilibrium is reached.

At any horizontal line above the point where the curves cross, there is a shortage of supply, with demand exceeding the available supply due to extremely low fees. This should cause blocks to be filled and result in an upward shift of the supply curve.

Equilibrium should be found at exactly the level at which the space available exactly equals the demand for that space at the given per-byte fee.
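The equilibrium argument can be made concrete with a toy model. The specific curve shapes and constants below are purely illustrative assumptions, chosen only so the two curves cross:

```python
def equilibrium_fee(demand, supply, lo=1e-6, hi=1e6, iters=100):
    """Bisect for the fee level at which demand for space equals the
    space made available. Assumes demand is decreasing and supply is
    increasing in the fee (a toy model, not a protocol rule)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if demand(mid) > supply(mid):   # shortage: fee must rise
            lo = mid
        else:                           # surplus: fee must fall
            hi = mid
    return (lo + hi) / 2

# Hypothetical toy curves: demand falls with fee, supply grows with it.
demand = lambda fee: 1_000_000 / fee
supply = lambda fee: 10_000 * fee
fee = equilibrium_fee(demand, supply)   # crosses where 1e6/f == 1e4*f, i.e. f == 10
```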


Elements:


Increased required/minimum per-byte fee

This is important. It makes it expensive to bloat the chain, or to force the block size upward, with a consistent stream of long-term faux transactions. It also imposes a reasonable cost per byte, considering that staking nodes bear a marginal cost for operating the node. This cost would be especially important if CLAMspeech ever found traction as a provider of state for applications; this would be akin to "fuel" in Ethereum.


Limited, but adjustable (over a long-term window) block size

The network already has a maximum individual transaction size, doesn't it?
I would suggest this be hard-coded as the floor of the block size limit.

Considering that we already have 10x the throughput of Bitcoin, I would suggest that the hard-coded ceiling of the block size limit remain at 1 MB. If we configure our competing interests properly this shouldn't be "required", but it is likely wise for sanity.
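The floor and ceiling described above would simply clamp whatever the dynamic rule produces. The 1 MB ceiling follows the text; the maximum-transaction-size constant is a hypothetical placeholder:

```python
# Assumed constants, for illustration only.
MAX_TX_SIZE = 100_000     # hypothetical maximum individual transaction size, bytes
HARD_CEILING = 1_000_000  # 1 MB, the suggested hard-coded ceiling

def clamp_block_size_limit(dynamic_limit):
    """Keep the dynamically computed block size limit between the
    hard-coded floor (max single-transaction size) and ceiling (1 MB)."""
    return max(MAX_TX_SIZE, min(dynamic_limit, HARD_CEILING))
```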


Replace-by-fee implementation

Re: https://github.com/petertodd/bitcoin/tree/replace-by-fee-v0.11.0 https://github.com/petertodd/replace-by-fee-tools

This would allow users to replace currently pending transactions (presumably waiting for confirmation due to congested blocks) with higher-fee-paying transactions. This is an important element if you intend to restrict the block size to near demand.


Sensible/automated replace-by-fee wallet functionality

Replace-by-fee has no value if wallets are not capable of using the feature. The concept is deceptively simple: users define a fee range they are willing to pay. The client tracks an average of the fees paid by competing transactions in the mempool. When a transaction is created, the user selects a priority from a drop-down, which in turn sets the fee range. The client begins by broadcasting the transaction with a fee at the bottom of the range. On each re-broadcast, assuming the transaction has not yet been included in a block, the wallet increases the fee. This keeps the user's transaction competitive.
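The fee-bumping behavior described above might look something like the sketch below. The linear bump schedule and the attempt count are assumptions; a real wallet would also consult the mempool fee average mentioned above:

```python
def next_fee(fee_min, fee_max, attempt, max_attempts=5):
    """Illustrative replace-by-fee bump schedule: start at the bottom
    of the user's chosen fee range and step linearly toward the top
    on each re-broadcast, never exceeding the user's maximum."""
    if max_attempts <= 1:
        return float(fee_max)
    step = (fee_max - fee_min) / (max_attempts - 1)
    return min(fee_min + attempt * step, fee_max)

# First broadcast pays fee_min; by the final attempt the fee reaches fee_max.
fees = [next_fee(100, 500, a) for a in range(5)]
```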


Reasonable default node settings to prioritize higher fee transactions

Staking nodes should implement the replace-by-fee architecture as well as prioritize transactions by per-byte fee paid.
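Prioritizing by per-byte fee is essentially greedy block-template construction. A simplified sketch, assuming transactions are plain records with a fee and a size:

```python
def select_transactions(mempool, max_block_size):
    """Greedy sketch of fee prioritization: rank pending transactions
    by fee paid per byte, highest first, and fill the block until the
    size limit is reached. Real nodes also weigh dependencies, etc."""
    ranked = sorted(mempool, key=lambda tx: tx["fee"] / tx["size"], reverse=True)
    block, used = [], 0
    for tx in ranked:
        if used + tx["size"] <= max_block_size:
            block.append(tx)
            used += tx["size"]
    return block
```

Under such a policy, a replaced (higher-fee) version of a transaction naturally outranks the original it supersedes.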


Distributed Denial of Service and Bloat Attacks

During a distributed denial of service attack, the attacker would fill the network's blocks and the mempool with transactions. During a bloat attack, the attacker would attempt to weigh the chain down with useless transactions and other data, in order to make the chain unsustainable and unmaintainable in the long term. Four dynamics of this system would make it resilient to these attacks: