davidkel / provision-performance


Provide or reference articles about using caliper with fabric #33

Open davidkel opened 2 years ago

davidkel commented 2 years ago

eg

davidkel commented 2 years ago

General Fabric Guidance for K8s env

davidkel commented 2 years ago

General Caliper Guidance

davidkel commented 2 years ago

Notes about Fabric performance

Some stats

For blind writes: note that the backlog remained stable for this run (i.e. it didn't grow gradually or exponentially), so these TPS figures are sustainable over longer periods of time.

block_cut_time: 1s, block_size: 50, preferred_max_bytes: 512 KB

+------------------+--------+------+-----------------+-----------------+-----------------+-----------------+------------------+
| Name             | Succ   | Fail | Send Rate (TPS) | Max Latency (s) | Min Latency (s) | Avg Latency (s) | Throughput (TPS) |
|------------------|--------|------|-----------------|-----------------|-----------------|-----------------|------------------|
| create-asset-100 | 360150 | 0    | 2996.2          | 2.04            | 0.28            | 0.73            | 2983.6           |
+------------------+--------+------+-----------------+-----------------+-----------------+-----------------+------------------+

Gateway peer: Max CPU 60%, Max Memory 5.16%, Max Disk 51.1 MB/s
Orderer: Max CPU 11%, Max Memory 2.98%, Max Disk 23.1 MB/s (for disk I/O I see spikes of 80 MB/s which are not captured, I think because of Prometheus's 5s sampling)

Note that I am deliberately not including any details about the machines used, as these are NOT to be considered any sort of formal benchmark results. I will say that the machines are bare metal, each running a single Fabric process.

davidkel commented 2 years ago

What can be said about orderer parameters such as the block cutting timeout and the block triggering sizes (transaction count and maximum bytes)? What other parameters could affect a peer/orderer to improve performance, or alter its characteristics to suit a certain kind of load profile?

davidkel commented 2 years ago

Transaction throughput is significantly affected by payload size as well as by ordering service settings.
You might want to try to configure the ordering service with more transactions per block and longer block cutting times to see if that helps. We have seen this increase the overall throughput at the cost of additional latency.
Your network throughput might also be a factor, particularly if your peer nodes are not running at very high CPU utilization.
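For reference, the settings mentioned above live in the Orderer section of configtx.yaml. This is a sketch with purely illustrative values, not recommendations; for an existing channel these values are part of the channel configuration and must be changed via a channel config update rather than by editing the file.

```yaml
Orderer: &OrdererDefaults
  # Wait this long after the first transaction arrives before cutting a block
  BatchTimeout: 2s
  BatchSize:
    # Cut a block once this many transactions have been received...
    MaxMessageCount: 500
    # ...or once the serialized transactions reach these size limits
    AbsoluteMaxBytes: 10 MB
    PreferredMaxBytes: 2 MB
```

Raising MaxMessageCount and BatchTimeout tends to trade higher throughput for higher latency, as noted above.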

davidkel commented 2 years ago

K8s specific

The following three parameters work together to control when a block is cut, based on a combination of setting the maximum number of transactions in a block as well as the block size itself.

Set the Timeout value to the amount of time, in seconds, to wait after the first transaction arrives before cutting the block. If you set this value too low, you risk preventing the batches from filling to your preferred size. Setting this value too high can cause the orderer to wait for blocks and overall performance to degrade. In general, we recommend that you set the value of Batch timeout to be at least max message count / maximum transactions per second.
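The rule of thumb above ("at least max message count / maximum transactions per second") can be sanity-checked with a quick calculation. The numbers here are hypothetical, not taken from the benchmark results in this thread.

```python
# Hypothetical values -- substitute your own channel settings and load.
max_message_count = 500   # BatchSize.MaxMessageCount in the channel config
expected_max_tps = 250    # peak sustained transactions per second you expect

# Recommended lower bound for BatchTimeout, in seconds:
min_batch_timeout_s = max_message_count / expected_max_tps
print(f"Set BatchTimeout to at least {min_batch_timeout_s:.1f}s")
```

At lower arrival rates the timeout, not the message count, will be what cuts most blocks.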

davidkel commented 2 years ago

More information to be consolidated

When using an external CouchDB state database, read delays during the endorsement and validation phases have historically been a performance bottleneck.
Fabric v2.0 introduced a peer state cache that replaces many of these expensive lookups with fast local cache reads. The cache size can be configured using the core.yaml property cacheSize.
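As a sketch, assuming the Fabric v2.x core.yaml layout, the property sits under the CouchDB state database configuration:

```yaml
ledger:
  state:
    stateDatabase: CouchDB
    couchDBConfig:
      # State cache size in MB; 0 disables the cache.
      # 64 is the value shipped in the stock core.yaml.
      cacheSize: 64
```

Check the core.yaml shipped with your Fabric version for the exact key placement and default.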

Prefer not to use rich queries in chaincode; use an off-chain store for that instead. If you do use them, make sure your queries are optimised and indexed (so avoid queries that cannot use an index).

Review the chaincode and add CouchDB indexes for queries:

If indexes are used, review the existing indexes and queries and fine-tune the queries to reduce the number of records returned.

Do not issue open-ended or "count" queries.

Do not use operators such as $regex, $in, or $and, which typically cannot be served by an index.
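Fabric picks up CouchDB index definitions packaged with the chaincode under META-INF/statedb/couchdb/indexes/. A minimal sketch, assuming hypothetical docType and owner fields in your asset documents:

```json
{
  "index": {
    "fields": ["docType", "owner"]
  },
  "ddoc": "indexOwnerDoc",
  "name": "indexOwner",
  "type": "json"
}
```

A rich query whose selector filters on exactly these fields (e.g. `{"selector":{"docType":"asset","owner":"alice"}}`) can then be answered from the index instead of a full scan.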

State database cache for improved performance on CouchDB: I doubt this will do anything for rich queries (need to check).


davidkel commented 2 years ago

A further idea: Fabric performs bulk update calls to CouchDB to improve CouchDB performance. This should be exploited; however, you may need to increase the bulk size if you have large transactions (although large transaction sizes are a bad idea). As one report put it: "we increased the batch setting so huge blocks that were batched helped".
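The bulk size mentioned above corresponds, assuming the stock Fabric v2.x core.yaml, to the maxBatchUpdateSize property; verify the key name against the core.yaml of your Fabric version:

```yaml
ledger:
  state:
    couchDBConfig:
      # Maximum number of records sent to CouchDB per bulk update call;
      # 1000 is the value shipped in the stock core.yaml.
      maxBatchUpdateSize: 1000
```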

BashayerAlkalifah commented 4 months ago
Regarding the Caliper test, can you explain how I can do these points? 1) use remote workers, don't use local process workers 2) ensure Caliper is running on a different system to the Fabric network under test
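Not an authoritative answer, but as a sketch: Caliper's runtime configuration (e.g. a caliper.yaml file, or the equivalent --caliper-* command-line flags) lets you mark workers as remote and coordinate them with the manager over a message broker, so the workers can be started on machines separate from the Fabric network. The key names below are my best recollection of the Caliper 0.4.x layout and the broker address is a placeholder; check the Caliper runtime configuration documentation for the exact settings.

```yaml
caliper:
  worker:
    # Workers are launched separately (possibly on other machines)
    # instead of being forked as local child processes
    remote: true
    communication:
      # Manager and workers coordinate via an MQTT broker
      method: mqtt
      address: mqtt://broker-host:1883   # placeholder address
```

With this in place, run the manager on one machine and start each worker process on its own machine, all pointing at the same broker; none of them should share hardware with the peers or orderers under test.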