This change raises FUSE's max background setting to 64 (from the kernel default of 16), which improves sequential read performance on instances with multiple 100 Gbps network interfaces. The setting controls how many requests classified as background, which includes at least some read requests, are allowed in the pending queue. It also indirectly controls the "congestion threshold", which defaults to 75% of the max background value. When the congestion threshold is reached, FUSE stops sending the asynchronous part of readaheads from paged IO to the filesystem.
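To make the relationship concrete, here is a minimal sketch (not the kernel's or fuser's actual code) of how the default congestion threshold follows from the max background value:

```rust
/// Sketch of the default described above: when the congestion threshold is not
/// set explicitly, it is derived as 75% of the max background value.
/// This mirrors the documented behaviour; it is not the actual FUSE kernel code.
fn default_congestion_threshold(max_background: u16) -> u16 {
    // 75% of max_background, rounded down.
    max_background * 3 / 4
}

fn main() {
    // With the kernel default of 16 background requests, readahead throttling
    // kicks in once 12 background requests are pending.
    assert_eq!(default_congestion_threshold(16), 12);
    // With the new value of 64, the threshold moves to 48, leaving more
    // headroom before FUSE stops sending asynchronous readahead.
    assert_eq!(default_congestion_threshold(64), 48);
}
```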
Testing on 2-NIC instances shows a speed-up of up to approximately 29% on a sequential read workload with 32 open files, from 76.74 Gbps to 99 Gbps, for paged IO. Although we don't have enough instrumentation to fully understand the change in queueing behaviour in FUSE, we think the improvement is likely because, with the higher limit, we can serve enough readahead requests for the object before hitting the congestion threshold, which lets Mountpoint start prefetching later parts of the object sooner.
The value of 64 was picked by experimenting with values between 16 (the default) and 256, as well as by setting the congestion threshold explicitly. Increasing the value generally led to better performance up to 64, after which performance did not improve further (at least not significantly). We wanted the lowest value that still delivered the desired performance improvement, to reduce the chance of affecting a workload that wasn't being tested.
In addition to the standard regression tests, the change was tested on trn1 instances with a 256 KB sequential read workload reading 32 files in parallel over 1, 2, and 4 network interfaces. It regresses neither our standard benchmarks nor performance on this test with a single NIC in use.
This change also temporarily introduces two environment variables to tune the behaviour, so we can isolate this change if a particular workload is found to regress.
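Since this PR text doesn't name those variables, the sketch below uses placeholder names purely to illustrate how such temporary overrides might be read; the real names come from the change itself:

```rust
use std::env;

/// Hypothetical helper showing how temporary environment-variable overrides
/// could feed into the mount configuration. The variable names below are
/// illustrative placeholders, not the actual names introduced by this change.
fn read_override(name: &str, default: u16) -> u16 {
    env::var(name)
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(default)
}

fn main() {
    // Placeholder names for illustration only.
    let max_background = read_override("EXAMPLE_MAX_BACKGROUND", 64);
    let congestion_threshold =
        read_override("EXAMPLE_CONGESTION_THRESHOLD", max_background * 3 / 4);
    println!("max_background = {max_background}, congestion_threshold = {congestion_threshold}");
}
```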
Does this change impact existing behavior?
This improves performance on large instance types. There's a risk of regression for workloads we don't test.
Does this change need a changelog entry in any of the crates?
Yes, will submit a separate PR.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license and I agree to the terms of the Developer Certificate of Origin (DCO).
Note that I updated the PR comment to reduce the quoted improvement, because I had originally misquoted the 1-NIC test result as the baseline, instead of the baseline 2-NIC test result.