Open niklasad1 opened 1 week ago
The CI pipeline was cancelled due to the failure of one of the required jobs. Job name: test-linux-stable 2/3. Logs: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7365923
Closes https://github.com/paritytech/polkadot-sdk/issues/5589
This PR makes it possible for `rpc_v2::Storage::query_iter_paginated` to be "backpressured", which is achieved by having a channel where the results are sent back; when this channel is "full", we pause the iteration. `chainHead_follow` has an internal channel which doesn't represent the actual connection, and it is set to a very small number (16). Recall that the JSON-RPC server has a dedicated buffer for each connection, by default 64.
Notes
- `archive_storage` also depends on `rpc_v2::Storage::query_iter_paginated`, so I had to tweak the method to support limits as well. The reason is that `archive_storage` won't get backpressured properly because it's not a subscription. (It would be much easier if it were a subscription in the rpc v2 spec, because then there would be nothing against querying a huge amount of storage keys.)
- `query_iter_paginated` doesn't necessarily return the storage "in order": for example, `query_iter_paginated(vec![("key1", hash), ("key2", value)], ...)` could return the entries in arbitrary order, because the queries are wrapped in `FuturesUnordered`. I could change that if we want to process them in order (it's slower).
- the memory usage of a `chainHead_v1_storage` call is now bounded by the small internal channel rather than the rpc max message limit, which is 10 MB, and only a maximum of 16 `chainHead_v1_x` calls are allowed concurrently (this should be fine)

Benchmarks using subxt on localhost
The reason for this is, as Josep explained in the issue, that one is only allowed to query five storage items per call, so clients have to make lots of calls to drive the query forward.