Open alexandrnikitin opened 9 months ago
The check only happens when the server starts. How is that the hot path?
I'm also surprised to see it from AWS. We have dozens of worker nodes and thousands of builds per day, which is not a crazy number, but I frequently see that error in the logs.
I see that others have also reported the same or similar issues: https://github.com/mozilla/sccache/issues/1485 https://github.com/mozilla/sccache/issues/1485#issuecomment-1375160422 And there are PRs to mitigate it: https://github.com/mozilla/sccache/pull/1557
S3 has rate limits: many reads and writes to a single key can be throttled far before the underlying partition itself is rate limited. Even 20-30 PUTs to a single key within a very short period of time can exhaust it. On versioned buckets the limit is even lower, especially when many millions of versions exist for that key.
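For what it's worth, here is a minimal sketch of the standard client-side mitigation for SlowDown (HTTP 503) throttling: retry with exponential backoff. The operation and the throttling check are generic placeholders, and the mitigation PR linked above may take a different approach:

```rust
// Sketch: retry a throttled operation with exponential backoff.
// `op` and `is_throttled` are placeholders for the real S3 PUT and
// its error classification; this is not sccache's actual code.
use std::thread::sleep;
use std::time::Duration;

fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    is_throttled: impl Fn(&E) -> bool,
    max_attempts: u32,
) -> Result<T, E> {
    assert!(max_attempts >= 1);
    let mut delay = Duration::from_millis(100);
    for attempt in 1..=max_attempts {
        match op() {
            // Retry only on throttling, and only while attempts remain.
            Err(e) if is_throttled(&e) && attempt < max_attempts => {
                sleep(delay);
                delay *= 2; // back off: 100ms, 200ms, 400ms, ...
            }
            other => return other,
        }
    }
    unreachable!("the final attempt always returns above")
}
```

The check PUT would then be wrapped in something like retry_with_backoff(do_put, is_slow_down, 5), where both closures are hypothetical names here.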
Hey, I'm seeing a lot of rate-limiting errors at the storage check (S3 backend). The ".sccache_check" file used for that check is on the hot path. What do you think about making it configurable and exposing it as an environment variable? Each actor could have its own file for the read/write access check. That would help mitigate the issue. WDYT?

Example of the error:
The code:
https://github.com/mozilla/sccache/blob/69be5321d2c2c125881b6edfed96676572b0ca03/src/cache/cache.rs#L481-L544
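To illustrate the proposal, here is a hypothetical sketch of reading the check-file key from an environment variable. SCCACHE_CHECK_FILE is an assumed variable name, not an existing sccache option:

```rust
// Hypothetical sketch of the proposal: let each actor override the
// storage-check key via an environment variable instead of everyone
// writing to the shared ".sccache_check" key.
use std::env;

fn storage_check_key() -> String {
    // Fall back to the current hardcoded name when the variable is unset.
    env::var("SCCACHE_CHECK_FILE").unwrap_or_else(|_| ".sccache_check".to_string())
}

fn main() {
    // E.g. export SCCACHE_CHECK_FILE=".sccache_check_$(hostname)" so each
    // worker node probes its own key and the PUTs spread across keys.
    println!("storage check key: {}", storage_check_key());
}
```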