Closed: zhy827827 closed this issue 10 months ago
Above 1 million transactions today, because of DotOrdinals inscriptions.
Which Polkadot API are you talking about?
This is most likely a bug in Sidecar: it is too slow to handle all the extrinsics.
Thanks for reporting. Indeed, this seems to be an issue within API Sidecar; we are looking into it.
We are facing the same issue. Right after a start the response time is already huge, and after sequential requests it becomes much, much worse.
GET /blocks/18685671 200 41512ms
GET /blocks/18685673 200 41645ms
GET /blocks/18685637 200 132598ms
GET /blocks/18685670 200 43180ms
GET /blocks/18685660 200 76406ms
GET /blocks/18685644 200 78103ms
GET /blocks/18685654 200 51118ms
GET /blocks/18685637 200 112040ms
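For anyone comparing numbers, here is a small standalone helper (not part of Sidecar; the function name is my own) that averages the latencies out of access-log lines like the ones above:

```typescript
// Parse lines like "GET /blocks/18685671 200 41512ms" and average the
// trailing millisecond latencies; lines without a latency are skipped.
function averageLatencyMs(logLines: string[]): number {
  const times: number[] = [];
  for (const line of logLines) {
    const m = /(\d+)ms\s*$/.exec(line);
    if (m) times.push(Number(m[1]));
  }
  return times.length ? times.reduce((a, b) => a + b, 0) / times.length : 0;
}

const sample = [
  "GET /blocks/18685671 200 41512ms",
  "GET /blocks/18685673 200 41645ms",
];
console.log(averageLatencyMs(sample)); // prints 41578.5
```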
We tried significantly increasing our CPU/memory allocation and also raising --max-old-space-size, but it doesn't seem to improve much.
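For context, Sidecar is a Node.js service, so the heap ceiling mentioned above is a V8 flag. One common way to pass it is through NODE_OPTIONS; the value and invocation below are illustrative, adjust to your deployment:

```shell
# Illustrative only: raise the V8 heap limit to ~8 GB before starting Sidecar.
export NODE_OPTIONS="--max-old-space-size=8192"
npx @substrate/api-sidecar
```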
This was addressed by our latest release.
This release focuses on improving the performance of the tool, resolving a regression where blocks were overwhelmed with transactions. The noFees query parameter removes fee info from blocks when the user does not need fees. For the more general case where fees are necessary, we have increased the performance of querying /blocks while still calculating fees. This was done in two ways: ensuring the transactionPaidFee, ExtrinsicSuccess, and ExtrinsicFailure info is used to its fullest so we avoid making additional RPC calls, and ensuring the extrinsics are processed concurrently.
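As a rough illustration of the concurrency change (a sketch, not Sidecar's actual code; feeFor and the fee values are made up), processing extrinsics with Promise.all lets the per-extrinsic lookups overlap instead of serializing the round trips:

```typescript
// Sketch only: "feeFor" stands in for a per-extrinsic fee lookup;
// the returned values are fabricated for the example.
async function feeFor(extrinsicIndex: number): Promise<number> {
  return extrinsicIndex * 10;
}

// Firing all lookups at once (Promise.all) instead of awaiting each one
// in turn means the slowest lookup, not the sum of all, bounds the wait.
async function feesForBlock(extrinsicCount: number): Promise<number[]> {
  const lookups = Array.from({ length: extrinsicCount }, (_, i) => feeFor(i));
  return Promise.all(lookups);
}

feesForBlock(4).then(fees => console.log(fees.join(","))); // prints 0,10,20,30
```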
What were the performance test outcomes?
We are using the new release and the noFees param, but still see 2-second response times. That's a major improvement from the 25-30 second responses, but it is substantially slower than the sub-second responses of the past.
@exp0nge
> We are using the new release and the noFees param, but still see 2-second response times. That's a major improvement from the 25-30 second responses, but it is substantially slower than the sub-second responses of the past.
What version were you using before you updated to 17.3.3? From 17.3.2 -> 17.3.3, performance was the only change we made.
If I had to guess, the reason you are seeing an increase in response time is that the average block size in terms of extrinsics has gone up dramatically. Just a day and a half ago, the average number of extrinsics per block was probably in the low tens to single digits, whereas now it's consistently averaging in the hundreds.
But in terms of Sidecar itself, if you test against older blocks you will see an increase in performance.
@TarikGul
> What version were you using before you updated to 17.3.3? From 17.3.2 -> 17.3.3, performance was the only change we made.
We went from 17.3.2 -> 17.3.3 at the start of the day specifically for this, so we were purely in it for the performance gain.
> If I had to guess, the reason you are seeing an increase in response time is that the average block size in terms of extrinsics has gone up dramatically. Just a day and a half ago, the average number of extrinsics per block was probably in the low tens to single digits, whereas now it's consistently averaging in the hundreds.
Yeah, we noticed ordinal/inscription load on other networks too. However, the indirection through api-sidecar has added an extra layer of complication, since we're entirely reliant on it to talk to the Polkadot node.
--
We run API Sidecar in the same pod, next to the Polkadot node, in AWS EKS. From being a tiny sidecar, it now gets 8 GB requested / 16 GB limit of memory, while the node itself has significantly less: 4 GB / 8 GB. This is the only way I could think of to increase concurrent throughput, given that Sidecar continues to respond quite slowly. That is OK when we're not behind the chain tip, but very bad if we do fall behind, as there's only so much we can squeeze out of each pod. The node's performance doesn't seem to have been impacted at all, even though Sidecar puts so much demand on it; that makes me think there's even more performance to be had here. We have both noFees and finalizedKey set.
If we can get more performance, we can be healthier here. I was originally looking at https://github.com/paritytech/substrate-api-sidecar/issues/1361 before the report here accelerated some of that. We're happy to provide any other insights here that might help the team. I do realize with the holidays, this might be a challenge though.
Since this morning, API requests have been very slow; previously they were very fast.
What causes this?
polkadot version: v1.5.0, sidecar: v17.3.2
The server is configured with a 12-core CPU, 64 GB memory, and a 3 TB SSD.