Closed: bartossh closed this issue 11 months ago
Overcoming the transaction throughput bottleneck.
We are processing around 750 transactions per second across two nodes (1 CPU, 2 GB RAM for the DB), and it is clear the DB is the bottleneck. To verify this, run 2 nodes and then increase their number: you will notice that resources are still available on the nodes, while the DB starts to consume a lot of CPU and increasingly more RAM. We have a few options to overcome this restriction:
@kubagruszka @kmroz @dmatusiewicz-consult-red What do you think about some proxy repo replication:
What if we used Redis with high redundancy for all ephemeral data, such as:
The high redundancy is provided by the Kubernetes cluster, which secures the Redis processes (we can write to all the Redis nodes and read from the least-used one).
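A minimal sketch of the write-to-all / read-from-least-used idea, with an in-memory stand-in for the Redis clients (the `node` and `pool` types and the read-count load metric are hypothetical, not part of any existing codebase):

```go
package main

import (
	"fmt"
	"sync"
)

// node is a stand-in for a single Redis instance; in a real setup this
// would wrap a Redis client connection.
type node struct {
	mu    sync.Mutex
	data  map[string]string
	reads int // crude load metric: number of reads served
}

func newNode() *node { return &node{data: make(map[string]string)} }

func (n *node) set(k, v string) {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.data[k] = v
}

func (n *node) get(k string) (string, bool) {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.reads++
	v, ok := n.data[k]
	return v, ok
}

// pool fans writes out to every node and routes each read to the
// least-used node, as proposed above.
type pool struct{ nodes []*node }

// Set writes to all nodes so any replica can serve the key.
func (p *pool) Set(k, v string) {
	var wg sync.WaitGroup
	for _, n := range p.nodes {
		wg.Add(1)
		go func(n *node) { defer wg.Done(); n.set(k, v) }(n)
	}
	wg.Wait()
}

// Get picks the node that has served the fewest reads so far.
func (p *pool) Get(k string) (string, bool) {
	least := p.nodes[0]
	for _, n := range p.nodes[1:] {
		if n.reads < least.reads {
			least = n
		}
	}
	return least.get(k)
}

func main() {
	p := &pool{nodes: []*node{newNode(), newNode(), newNode()}}
	p.Set("tx:abc", "pending")
	v, ok := p.Get("tx:abc")
	fmt.Println(v, ok) // pending true
}
```

With a real Redis deployment the load metric would come from the server (e.g. connected clients or ops/sec) rather than a local counter, but the routing logic stays the same.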
Let's not rule out Postgres as a single-store solution. We can use unlogged tables in Postgres, which generate no WAL and are faster to update.
Unlogged tables may be a valuable use case for:
We will still have tables separated:
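A minimal sketch of the unlogged-table approach (the `pending_transactions` table name and columns are hypothetical). Note the trade-off: skipping the WAL makes writes faster, but the table is truncated after a crash and is not replicated to standbys, so it should hold only ephemeral data:

```sql
-- Hypothetical ephemeral table: no WAL is written for it, so inserts and
-- updates are faster, but its contents are truncated on crash recovery
-- and it is not streamed to replicas.
CREATE UNLOGGED TABLE pending_transactions (
    id         BIGSERIAL PRIMARY KEY,
    hash       BYTEA NOT NULL,
    payload    JSONB NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- An existing table can also be switched in place:
ALTER TABLE pending_transactions SET LOGGED;   -- back to durable
ALTER TABLE pending_transactions SET UNLOGGED; -- ephemeral again
```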
Irrelevant, as we are transitioning to a DAG protocol that will solve most of the issues regarding transaction throughput.
For comparison, transactions per second by blockchain implementation:
We need to:
Meeting required.