floblockchain / flo

The Flo Blockchain allows for 1040 bytes of arbitrary data storage on every transaction! FLO is an archival blockchain dedicated to long-term data storage and preservation. It offers a secure and decentralized platform for storing important information like public records, ensuring its integrity and accessibility over time.
MIT License

Increase Maximum utxo chain to 1250 (50x increase) #8

Closed OstlerDev closed 5 years ago

OstlerDev commented 5 years ago

In version 0.12.0 of Bitcoin Core, the config settings limitancestorcount and limitdescendantcount were added and set to a default of 25, thus limiting the maximum chain of unconfirmed transactions to only 25.
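For reference, these are node relay-policy settings, not consensus rules, and can be overridden per node in the configuration file (option names as introduced in Bitcoin Core 0.12.0; the file name and comments are illustrative):

```ini
# flo.conf — mempool chain policy (0.12.0 defaults shown)
limitancestorcount=25      # max unconfirmed ancestors a tx may have in the mempool
limitdescendantcount=25    # max unconfirmed descendants any ancestor may have
```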

I believe there are many situations where a company or individual will want to place large volumes of information onto the chain using floData in a very short amount of time. This change allows them to do as they please, broadcasting and storing much larger volumes of information per block using only a single utxo chain.

Currently, if they wish to place more than 25 transactions into a single block, they must create many utxo inputs, and then create chains of 25 transactions off each of those inputs. This creates an extra burden on the company or individual wishing to use Flo that may dissuade them completely.
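To illustrate that burden, here is a quick sketch (my own arithmetic, not from the thread) of how many separate utxo inputs the fan-out workaround requires:

```python
import math

ANCESTOR_LIMIT = 25  # default limitancestorcount since Bitcoin Core 0.12.0

def utxos_needed(num_txs: int, chain_limit: int = ANCESTOR_LIMIT) -> int:
    """Minimum number of separate utxo inputs needed to broadcast
    num_txs unconfirmed transactions when each chain is capped at
    chain_limit."""
    return math.ceil(num_txs / chain_limit)

# Filling a block with ~900 max-floData txs takes 36 separate chains today,
# but only a single chain under the proposed limit of 1250.
print(utxos_needed(900))        # -> 36
print(utxos_needed(900, 1250))  # -> 1
```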

I propose that we increase the limit to 1250, as that will provide a 50x increase in throughput. Since only ~900 transactions with 1040-byte floData can fit in a single block, this allows users to completely fill a block and still have some wiggle room.
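As a sanity check on that ~900 figure: assuming roughly 100 bytes of non-floData overhead per transaction (version, inputs, outputs, locktime — the overhead number is my assumption, not from the thread), the arithmetic works out:

```python
BLOCK_SIZE = 1_000_000  # 1 MB block size, in bytes
FLO_DATA = 1040         # max floData bytes per transaction
OVERHEAD = 100          # assumed non-floData bytes per transaction

# How many maximally-sized floData transactions fit in one block
txs_per_block = BLOCK_SIZE // (FLO_DATA + OVERHEAD)
print(txs_per_block)  # -> 877, i.e. roughly ~900
```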

cchrysostom commented 5 years ago

How did you come to 1,250 ancestor transactions as the default?

OstlerDev commented 5 years ago

@cchrysostom I decided to set the limit to 1250 because, while ~900 txs with 1040 bytes of floData each would fill up a block, you can fit ~1250 txs with ~800 bytes of floData each into a single block.

bitspill commented 5 years ago

I'm not sure setting DEFAULT_*_SIZE_LIMIT = 5050; (5 MB) would be the best choice, since the block size is only 1 MB. We should try to keep the whole set within one block by setting a large DEFAULT_*_LIMIT as you did, but keep DEFAULT_*_SIZE_LIMIT to around 900 kB.

OstlerDev commented 5 years ago

@bitspill I had just increased DEFAULT_*_SIZE_LIMIT by 50x as well, but we can absolutely set it to something lower.

What would you think about increasing DEFAULT_*_SIZE_LIMIT to 1.75MB? That way there is enough room for 1250 full floData transactions (1040 B × 1250 = 1.3 MB), along with room for the inputs and outputs of said transactions (300 B × 1250 ≈ 0.4 MB), as well as a little wiggle room for good measure.
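Checking that budget explicitly (numbers taken from the comment above; the 300-byte input/output overhead is the comment's own estimate):

```python
LIMIT = 1250           # proposed DEFAULT_*_LIMIT
FLO_DATA = 1040        # bytes of floData per transaction
TX_OVERHEAD = 300      # estimated bytes of inputs/outputs per transaction
SIZE_LIMIT = 1_750_000 # proposed DEFAULT_*_SIZE_LIMIT (1.75 MB)

total = LIMIT * (FLO_DATA + TX_OVERHEAD)
print(total)                # -> 1675000 bytes
print(total <= SIZE_LIMIT)  # -> True, with 75 kB of wiggle room
```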

bitspill commented 5 years ago

> What would you think about increasing DEFAULT_*_SIZE_LIMIT to 1.75MB? That way there is enough room for 1250 full floData transactions (1040 B × 1250 = 1.3 MB), along with room for the inputs and outputs of said transactions (300 B × 1250 ≈ 0.4 MB), as well as a little wiggle room for good measure.

Would it be possible to test this on testnet? Similar to Joey's comment in Telegram, I'm just concerned that the wallet/miner may be making assumptions about what can go into a block from the mempool, and potentially going over the block size may introduce unintended consequences.

OstlerDev commented 5 years ago

> Would it be possible to test this on testnet?

Yes, I have modified several of my nodes already to reflect those settings. If you modify the pools you have access to on testnet as well, then we should be able to do a full test of the change before it goes live :)

OstlerDev commented 5 years ago

@bitspill I have been running tests with the values DEFAULT_*_LIMIT=1250 and DEFAULT_*_SIZE_LIMIT=1750 (1.75MB) on testnet, and have found those values to work well.

The most recent test chained 10,000 transactions together (pausing whenever the ancestor/descendant limit was reached). The test took ~15.5 minutes to run, and the transactions were confirmed in blocks 410237 to 410257 (20 blocks in total).
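For rough context, those test numbers imply the following rates (durations are approximate, per the comment above):

```python
TXS = 10_000     # transactions chained in the test
MINUTES = 15.5   # approximate duration of the test
BLOCKS = 20      # blocks 410237 through 410257 inclusive minus one gap-free span

print(round(TXS / MINUTES))  # -> 645 txs broadcast per minute, roughly
print(TXS // BLOCKS)         # -> 500 txs confirmed per block, on average
```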

OstlerDev commented 5 years ago

I have just added a commit to my PR branch that modifies the size limit to 1.75MB to reflect the tests run.

OstlerDev commented 5 years ago

The initial limit was added in this PR: https://github.com/bitcoin/bitcoin/pull/6654 and then later lowered (before release, I believe) in this PR: https://github.com/bitcoin/bitcoin/pull/6771

There is a decent amount of discussion on both PRs.

cchrysostom commented 5 years ago

In this discussion, we haven't discussed the two primary reasons the original author of 6654 wanted to reduce the transaction chain size. The first was essentially mempool clogging or spamming. The second was something he called a "relay fee boosting attack."

For the first issue, I would argue that FLO hasn't seen its mempool flooded with spam transactions. Oddly, our use case, in which a single publisher posts thousands or millions of records in a single run, would look like spam (lots of txs spending from the same utxo). So, for right now, it isn't an issue, and we have several use cases at MLG that would legitimately spend in that fashion.

To monitor this "mempool clogging" situation, we could improve our explorers to watch for this type of spending, perhaps keeping a log of long-unconfirmed-tx chains. We could also highlight known addresses.

The second issue is a little harder for me to address. I haven't found a description of a relay fee boosting attack. I assume it is a consequence of spamming a mempool with small txs that get relayed because of a low minimum-relay-fee setting. If that is the case, I think we should monitor that situation, too.

My thought on both of these issues is that we go forward with the PR, but commit to coming up with ways to easily monitor the mempool situation. Perhaps something that clearly displays a count of long transaction chains and the addresses generating them. Also, we should recommend (or default to) a larger mempool size if 300 MB is not enough.

trentlarson commented 5 years ago

True. It seems that this opens yet another door to mempool spamming, but it doesn't seem like the easiest to exploit for purposes of DoS... and I like other approaches rather than disallowing this use case. ( https://github.com/bitcoin/bitcoin/pull/6722 is one such approach that has been merged.)

cchrysostom commented 5 years ago

> True. It seems that this opens yet another door to mempool spamming, but it doesn't seem like the easiest to exploit for purposes of DoS... and I like other approaches rather than disallowing this use case. ( bitcoin/bitcoin#6722 is one such approach that has been merged.)

Good find, @trentlarson. The behavior of removing minimum-fee transactions when the mempool grows to its limit will slow down an attack. But there is a chance that legitimate bulk-loading transactions would get removed. We would have to ask application developers; we should keep an eye out for this debug log line: "Removed %u txn, rolling minimum fee bumped to %s". Or is there another way to detect a removed tx?

daviortega commented 5 years ago

I still need to wrap my mind around this... but I would be more comfortable if we do the following test:

Four or five of us try to put 10k tx at the same time on testnet and monitor as many things as we can to see what happens.

What do you guys think?

I am volunteering to be one of these nodes.

cchrysostom commented 5 years ago

A test on testnet is a good idea. I believe @OstlerDev has an easy-to-use script that can be passed around. Also, I think testnet has 4 miners going now.

metacoin commented 5 years ago

My initial thoughts on this are that the changes in commit https://github.com/bitcoin/bitcoin/commit/971a4e6b86e5bd3ac33441d8c031d63d88c59a8b are overly conservative for FLO's case where many small transactions are needed for bulk metadata writing.

The original limits were similar to the numbers we are proposing now, particularly for descendants:

static const unsigned int DEFAULT_ANCESTOR_LIMIT = 100;
static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 900;
static const unsigned int DEFAULT_DESCENDANT_LIMIT = 1000;
static const unsigned int DEFAULT_DESCENDANT_SIZE_LIMIT = 2500;

The above limits were active in bitcoin from July to October of 2015.

My main concern is the fact that CalculateMempoolAncestors() modifies entry and setAncestors. https://github.com/floblockchain/flo/blob/flo-master/src/txmempool.cpp#L158

This function is called by miner.cpp (the miner's selection of transactions), rbf.cpp (to determine replace-by-fee state), and blockchain.cpp (to answer the getmempoolancestors RPC command), but it seems that changing the limits here would not have much of an effect on those callers.

Changing these limits warrants a discussion about the potential for abuse. The situation is the opposite of defending against a 51% attack. Mempool attackers must constantly spend FLO to DOS the network while honest nodes only need to spend a bit more than the attacker in a single block to get their transactions into the chain. In general, I think attacks on the mempool aren't much more likely due to this change, because anyone can fan out transaction outputs and spend them without creating an ancestor chain.

bitspill commented 5 years ago

Are there more tests we'd like to run or can these PRs be merged now?

@cchrysostom @metacoin @daviortega

cchrysostom commented 5 years ago

I think the recent 60k-transaction test on testnet demonstrated that this PR's change does not harm current mempool behavior. I feel good about a merge.

OstlerDev commented 5 years ago

I also feel good about the test that was performed, so I would feel comfortable merging in the change.

metacoin commented 5 years ago

The tests we ran were handled properly by miners and didn't cause any unexpected behavior or errors; however, they highlighted some important challenges for applications building large sets of transactions:

  1. "Stuck" transactions can happen under certain conditions where peers have their mempool full and the rest of the ancestor chain does not get added to the next block.
  2. "Stuck" transactions need to be re-broadcast by a process defined in all implementations of FLO where a user interacts with the chain directly (qt, js-oip, etc.).

Before proceeding with this upgrade, we should know the answer to these questions:

  1. Under what conditions do transactions get stuck due to mempool issues?
  2. If a client generates a transaction that has ancestors above the allowed limit, does it accept that transaction into its own mempool?
  3. How do other nodes react to a node broadcasting a transaction in question (2) above?

bitspill commented 5 years ago

> 1. If a client generates a transaction that has ancestors above the allowed limit, does it accept that transaction into its own mempool?
> 2. How do other nodes react to a node broadcasting a transaction in question (2) above?

Transactions exceeding the limit are rejected whether they are relayed from the network or created locally: https://github.com/floblockchain/flo/blob/2fe0451/src/validation.cpp#L634-L643
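In other words, the check behaves the same for both paths. A minimal sketch of that control flow (this Python mirrors the shape of the mempool-acceptance logic and uses Bitcoin Core's actual reject reason string, but it is not the real C++ API):

```python
DEFAULT_ANCESTOR_LIMIT = 1250  # proposed FLO value (upstream default is 25)

def accept_to_mempool(ancestor_count: int,
                      limit: int = DEFAULT_ANCESTOR_LIMIT):
    """Sketch: reject a transaction whose unconfirmed ancestor chain,
    including itself, would exceed the ancestor limit. Applies equally
    to relayed and locally created transactions."""
    if ancestor_count + 1 > limit:
        return (False, "too-long-mempool-chain")
    return (True, "")

print(accept_to_mempool(1250))  # rejected: would create a 1251-deep chain
print(accept_to_mempool(1249))  # accepted: exactly at the limit
```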

cchrysostom commented 5 years ago

Are the stuck transactions orphans, where the parent utxo doesn't exist in either the mempool or a block?

> The tests we ran were handled properly by miners and didn't cause any unexpected behavior or errors; however, they highlighted some important challenges for applications building large sets of transactions:
>
> 1. "Stuck" transactions can happen under certain conditions where peers have their mempool full and the rest of the ancestor chain does not get added to the next block.
> 2. "Stuck" transactions need to be re-broadcast by a process defined in all implementations of FLO where a user interacts with the chain directly (qt, js-oip, etc.).
>
> Before proceeding with this upgrade, we should know the answer to these questions:
>
> 1. Under what conditions do transactions get stuck due to mempool issues?
> 2. If a client generates a transaction that has ancestors above the allowed limit, does it accept that transaction into its own mempool?
> 3. How do other nodes react to a node broadcasting a transaction in question (2) above?