Groxx closed this pull request 1 year ago.
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
---|---|---|---
internal/common/backoff/retry.go | 30 | 32 | 93.75%
internal/internal_task_pollers.go | 12 | 22 | 54.55%
Total: | 46 | 58 | 79.31%
Files with Coverage Reduction | New Missed Lines | %
---|---|---
internal/common/backoff/retry.go | 1 | 96.59%
Total: | 1 |
Totals |
---|---
Change from base Build 01838582-1814-4eb0-a654-48c8844fff22: | 0.04%
Covered Lines: | 12648
Relevant Lines: | 19706
Merging, will try to follow up this week with a cleanup (if feasible, given the custom behavior I remember... I suspect it won't be, but worth checking on anyway).
Builds on #1167, but adds a delay before retrying service-busy errors.
For now, since our server-side RPS quotas are calculated per second, this delays at least 1 second per service-busy error. Previously, the client would have retried up to about a dozen times in the same period, which is the cause of service-busy-based retry storms that produce many more service-busy errors.
This also gives us an easy way to make use of "retry after" information in errors we return to the caller, though currently our errors do not contain that.
Eventually this delay should probably come from the server, which has a global view of how many requests this service has sent and can provide a more precise delay to individual callers. E.g., our server-side ratelimiter currently works in 1-second slices, but that isn't guaranteed to stay true. The server could also detect truly large floods of requests and return jittered values larger than 1 second to stop the storm more decisively, or allow prioritizing some requests (like activity responses) over others simply by returning a lower delay.