With parallelization and other scalability work in flight, we need to ensure that core is configured correctly so that we can actually take advantage of the performance improvements, as well as properly measure their impact. This issue tracks the work to prepare core for high-throughput measurements. Here is a list of tunable limits identified so far in parallelization performance testing (feel free to append to/edit the list):
- [ ] Block size limit. Currently, the classic and Soroban block portions are each limited to 5MB due to the overlay limit of 16MB per message. This isn't much of a problem for testing classic TPS, since those transactions are small, but it is an issue when perf-testing Soroban, where transactions are KBs in size.
- [ ] Flow control limits. As we free up capacity on the main thread, core can process more transactions at a time, yet our flow control limits are set to batches of 200, leaving core latency-bound and under-utilizing the main thread.
- [ ] Tx queue limits. Currently, tx queue limits are 2x ledger capacity, which might be too restrictive in scenarios where some transactions get invalidated post-apply. This can leave blocks under-filled while transactions are rejected prematurely. We should consider increasing transaction queue capacity, as suggested in https://github.com/stellar/stellar-core/issues/4108.
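To make the block-size constraint concrete, here is a back-of-envelope sketch of the TPS ceiling implied by the 5MB block portion limit. The 5MB/16MB figures are from the list above; the average transaction sizes (~200 bytes classic, ~4KB Soroban) and the 5-second ledger close time are illustrative assumptions, not values read from core's config:

```python
# Illustrative TPS ceilings implied by the current block portion limit.
# Values marked "assumed" are NOT taken from stellar-core's configuration.

OVERLAY_MESSAGE_LIMIT = 16 * 1024 * 1024  # 16MB overlay limit per message
BLOCK_PORTION_LIMIT = 5 * 1024 * 1024     # 5MB each for classic and Soroban
LEDGER_CLOSE_SECONDS = 5                  # assumed average ledger close time


def max_txs_per_block(avg_tx_bytes: int,
                      portion_limit: int = BLOCK_PORTION_LIMIT) -> int:
    """Upper bound on transactions that fit in one block portion."""
    return portion_limit // avg_tx_bytes


def tps_ceiling(avg_tx_bytes: int) -> float:
    """TPS bound imposed purely by block size, ignoring other limits."""
    return max_txs_per_block(avg_tx_bytes) / LEDGER_CLOSE_SECONDS


# Classic txs are small (~200 bytes assumed): the 5MB cap is far from binding.
classic_ceiling = tps_ceiling(200)
# Soroban txs are KBs in size (~4KB assumed): the cap binds much sooner.
soroban_ceiling = tps_ceiling(4 * 1024)
```

With these assumptions the Soroban portion caps out around 1,280 transactions per block, roughly 256 TPS, which is why the 5MB limit matters for Soroban perf testing but not for classic.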