Closed hansieodendaal closed 4 months ago
3 files  120 suites  35m 58s :stopwatch:
1 275 tests  1 275 :white_check_mark:  0 :zzz:  0 :x:
3 817 runs  3 817 :white_check_mark:  0 :zzz:  0 :x:
Results for commit cc3258d6.
:recycle: This comment has been updated with latest results.
2 files +2  11 suites +11  46m 9s :stopwatch: +46m 9s
33 tests +33  30 :white_check_mark: +30  0 :zzz: ±0  3 :x: +3
38 runs +38  32 :white_check_mark: +32  0 :zzz: ±0  6 :x: +6
For more details on these failures, see this check.
Results for commit cc3258d6. ± Comparison against base commit 1d6e0d84.
Not sure what to do with the failing `response_too_big` unit test. I get `ServerClosedRequest` instead of `RpcError::RequestFailed` at the first check.
The problem was that the "payload too large" message wasn't being constructed properly and, without chunking, the max frame size should be >= max response size.
I pushed fixes for these.
Oops, did not see that. Working on it now.
No problem :) re-added the fix.
Description
Removed chunking from RPC as this does not improve latency or throughput and adds code complexity.
Motivation and Context
See above.
How Has This Been Tested?
System-level tests were conducted syncing a fresh archival base node in a controlled environment over Gigabit wired LAN where only one other base node was connected. The blockchain had many full blocks and many UTXOs, as a result of a 24-hour stress test.
Header sync No difference could be seen with header sync, as expected, due to the small message sizes.
Block sync Block throughput increased during full block sync for both coin-split and normal transactions due to reduced latency (data transfer time); up to 8% higher block throughput was measured when blocks were filled with normal transactions.
What process can a PR reviewer use to test or verify this change?
Review code changes and test data, or perform a similar comparative test.
Breaking Changes