aws / aws-sdk-go-v2

AWS SDK for the Go programming language.
https://aws.github.io/aws-sdk-go-v2/docs/
Apache License 2.0

Implement S3's diverse-IP performance recommendation internally #2331

Open josh-newman opened 3 years ago

josh-newman commented 3 years ago

Is your feature request related to a problem? Please describe.

My team runs batch data processing jobs using dozens of machines in EC2. The machines tend to boot up at the same time, and then each reads 10,000s of files from S3. Sometimes, this loading process is significantly slowed by S3 throttling (503 SlowDowns, connection resets, etc.), likely depending on S3's internal scaling for the (many) prefixes involved (we observed this both before and after 2018-07-17), and maybe on the number of concurrent jobs.

S3 performance recommendations say:

Finally, it’s worth paying attention to DNS and double-checking that requests are being spread over a wide pool of Amazon S3 IP addresses. DNS queries for Amazon S3 cycle through a large list of IP endpoints. But caching resolvers or application code that reuses a single IP address do not benefit from address diversity and the load balancing that follows from it.

I observed that the AWS-provided DNS resolver in our VPC seemed to internally cache results for S3 hostnames (bucket.us-west-2.s3.amazonaws.com) for around 4 seconds each. Since our machines initiate 10,000s of S3 object reads shortly after booting (and also periodically throughout the job—it works in phases), this apparently led to them connecting to relatively few S3 peers (demonstration program). I think this led to throttling even when our request rates were below S3's theoretical limits.

Describe the solution you'd like

It'd be great if the SDK handled this internally, transparently (for example, diversifying connection pools).

Describe alternatives you've considered

We're trying out a workaround: a custom "net/http".RoundTripper implementation that rewrites requests to spread load over all known S3 peers. Over time (over many VPC DNS cache intervals) we resolve more S3 IPs, spreading load over many peers and avoiding throttling (in our experience so far). However, this implementation is relatively inelegant and inconvenient, and there are probably better ways to handle this.

In other issues I've seen recommendations to use s3manager to retry throttling errors. Unfortunately, I don't think we can use that in our application: we're interested in streaming (read, compute, discard), and buffering in memory or on local disk would increase costs. Also, s3manager seems to use the same HTTP client as the regular interface, so I'd expect it to succeed slowly, whereas connecting to more peers could succeed quickly.

Additional context

I noticed that issues aws/aws-sdk-go#1763, aws/aws-sdk-go#3707, aws/aws-sdk-go#1242 mention throttling so there's a chance those users could benefit from this, too.

CC @jcharum @yasushi-saito

skotambkar commented 3 years ago

We do not have plans to implement this in the V1 SDK, but we may implement it in aws/aws-sdk-go-v2. Moving this feature request to the V2 SDK for tracking.

vdm commented 1 year ago

The C++ SDK does this as of 1.9, via its S3 CRT client: https://github.com/aws/aws-sdk-cpp/wiki/Improving-S3-Throughput-with-AWS-SDK-for-CPP-v1.9#working-with-the-new-s3-crt-client

zephyap commented 1 year ago

Hey, does anyone know if this made it into the SDK in the end?

aajtodd commented 1 year ago

It has not, and we have no plans to implement it currently. Feel free to upvote the issue, though (even better, comment with your own use case/workload and why this feature would help).

RanVaknin commented 5 months ago

Hi everyone on the thread,

We have decided not to move forward with implementing this. We are not in a position to own this behavior.

I'm going to close this.

Thanks, Ran~

github-actions[bot] commented 5 months ago

This issue is now closed. Comments on closed issues are hard for our team to see. If you need more assistance, please open a new issue that references this one.

lucix-aws commented 3 weeks ago

Reopening and attaching this to the feature/s3/manager backlog, which is where we'd like to put it. The CRT-based S3 transfer manager client has done something similar, if not identical.