deepset-ai / deepset-cloud-sdk

A Python SDK to interact with deepset Cloud
Apache License 2.0

fix: rate limits uploads to s3 #163

Closed · rjanjua closed this 8 months ago

rjanjua commented 8 months ago

Related Issues

Proposed Changes?

Wraps ClientSession in a rate-limiting class that caps outgoing requests at 500 per second, which should keep us comfortably under S3's threshold.

The documented S3 limit for PUT/COPY/POST/DELETE requests is reportedly 3,500 per second per prefix: https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimizing-performance.html
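
For illustration, here is a minimal sketch of what such a wrapper can look like. It assumes aiohttp's `ClientSession`; the class and method names below are hypothetical and only approximate the approach described above (a fixed one-second window rather than whatever the PR actually implements):

```python
import asyncio
import time

import aiohttp


class RateLimitedSession:
    """Hypothetical sketch: caps outgoing requests at `max_per_second`."""

    def __init__(self, session: aiohttp.ClientSession, max_per_second: int = 500) -> None:
        self._session = session
        self._max_per_second = max_per_second
        self._lock = asyncio.Lock()
        self._window_start = time.monotonic()
        self._requests_in_window = 0

    async def _throttle(self) -> None:
        # Fixed-window limiter: once the per-second budget is spent,
        # sleep until the next window starts.
        async with self._lock:
            now = time.monotonic()
            if now - self._window_start >= 1.0:
                self._window_start = now
                self._requests_in_window = 0
            if self._requests_in_window >= self._max_per_second:
                await asyncio.sleep(1.0 - (now - self._window_start))
                self._window_start = time.monotonic()
                self._requests_in_window = 0
            self._requests_in_window += 1

    async def post(self, url: str, **kwargs) -> aiohttp.ClientResponse:
        await self._throttle()
        return await self._session.post(url, **kwargs)
```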

How did you test it?

unit tests, manual test

Notes for the reviewer

Screenshots (optional)

Checklist

swarmia[bot] commented 8 months ago

✅  Linked to Bug DC-802 · SDK - many files not uploaded to s3 due to rate limiting
➡️  Part of Epic DC-794 · Test File Uploads

github-actions[bot] commented 8 months ago

Coverage report

The coverage rate went from 92.9% to 95.07% :arrow_up: The branch rate is 88%.

80% of new lines are covered.

Diff Coverage details (click to unfold)

### deepset_cloud_sdk/_s3/upload.py

`80%` of new lines are covered (`93.82%` of the complete file).
Missing lines: `121`

mathislucka commented 8 months ago

Why are we limiting at 500/s when the actual limit is around 3500/s?

Could we also handle it more gracefully and only limit dynamically once we actually hit the threshold?

rjanjua commented 8 months ago

> Why are we limiting at 500/s when the actual limit is around 3500/s?
>
> Could we also handle it more gracefully and only limit dynamically once we actually hit the threshold?

This is just a simple bug fix. We could do dynamic rate limiting, but that makes the code more complex: we would have to handle all the relevant errors and then throttle the uploads appropriately.

I chose 500 as a conservative but practical number.

We could increase the limit to 3,500. In theory that should be fine, but I was only able to find one source citing the 3,500 figure, and it is an article about performance optimization rather than an official table of S3 rate limits.

In practice we only see this issue with very small files. We already cap concurrency at 120 open requests; in testing, that limit led to the fastest overall upload speed in a desktop setting. In the majority of cases we won't be hitting 500 file uploads to S3 per second anyway.

We should only implement dynamic rate limiting if we find the need for it during testing or in the wild.
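
To make the trade-off concrete, the dynamic alternative would look roughly like the sketch below: retry an upload when S3 signals throttling (HTTP 503 "SlowDown") and back off before trying again. The helper name and structure are hypothetical, not part of this PR:

```python
import asyncio
import random


async def put_with_backoff(upload_one, *, max_retries: int = 5):
    """Hypothetical sketch: retry an upload coroutine with exponential
    backoff when S3 responds with HTTP 503 (throttling)."""
    for attempt in range(max_retries):
        response = await upload_one()
        if response.status != 503:
            return response
        # Back off exponentially with a little jitter before retrying.
        await asyncio.sleep(2**attempt * 0.1 + random.random() * 0.1)
    return await upload_one()
```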

rjanjua commented 8 months ago

> 500 seems very conservative, no? How about using 3000? That's about 85% of the limit mentioned in the official docs?

Will update to use 3000 req/s

> Btw, there is also a package which is a bit more battle-tested than our stuff and which does the same: https://pypi.org/project/limits/

It seems like the limits project is designed specifically for a few storage backends, so I will reimplement this with pyrate-limiter instead.
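
For reference, a minimal sketch of a pyrate-limiter-based limiter, assuming the pyrate-limiter 2.x API (`RequestRate` / `Limiter.ratelimit`; the 3.x API differs) and aiohttp; the function name is hypothetical and not the SDK's actual implementation:

```python
import aiohttp
from pyrate_limiter import Duration, Limiter, RequestRate

# 3,000 requests per second, roughly 85% of the 3,500 req/s figure cited above.
limiter = Limiter(RequestRate(3000, Duration.SECOND))


async def rate_limited_post(session: aiohttp.ClientSession, url: str, **kwargs):
    # delay=True makes the limiter wait for a free slot instead of raising.
    async with limiter.ratelimit("s3-uploads", delay=True):
        return await session.post(url, **kwargs)
```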