rclark closed this pull request 9 years ago
Does it make sense to only use the aws-sdk retries so there is only 1 place to adjust number of attempts?
It might make more sense; however, dyno also retries when you send batch requests and some of the sub-requests fail. I think it's possible to simplify this further, but the solution in this PR was the quickest way I saw to pull the plug on throughput retries, retain the batch retries, and not clobber the config too much. I'll take another look and consider clobbering a little more.
@rclark ok, I'm cool with shipping this for now and reevaluating how we handle all retries in another release if you want.
I took another look and revamped a bit. You can specify two retry values, one for throughput and one for batch requests. You can set these values client-wide via the initial `config` object, or override them for a particular request in the `options` object.
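A minimal sketch of how that client-wide vs. per-request precedence might resolve. The option names `throughputRetries` and `batchRetries` are placeholders for illustration, not dyno's actual keys:

```javascript
// Resolve effective retry counts: a per-request `options` value wins,
// otherwise fall back to the client-wide `config` value.
function resolveRetries(config, options) {
  return {
    throughput: options.throughputRetries !== undefined
      ? options.throughputRetries
      : config.throughputRetries,
    batch: options.batchRetries !== undefined
      ? options.batchRetries
      : config.batchRetries
  };
}

var clientConfig = { throughputRetries: 3, batchRetries: 5 };

// No per-request overrides: client-wide values apply.
console.log(resolveRetries(clientConfig, {}));

// Per-request override disables throughput retries for this one call.
console.log(resolveRetries(clientConfig, { throughputRetries: 0 }));
```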
In the throughput case, retries are implemented by adjusting the number of times aws-sdk will retry the request.
In the batch request case, retries are entirely handled by dyno.
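The batch case can be sketched as follows: DynamoDB's batch APIs report partially failed sub-requests in the response's `UnprocessedItems`, and a dyno-style retry loop resends only those leftovers. This is an illustration with a fake sender, not dyno's actual implementation:

```javascript
// Resend only the unprocessed sub-requests of a batch call, up to
// `attempts` total tries.
function retryBatch(send, request, attempts, callback) {
  send(request, function (err, response) {
    if (err) return callback(err);
    var leftover = response.UnprocessedItems || {};
    if (Object.keys(leftover).length === 0 || attempts <= 1)
      return callback(null, response);
    retryBatch(send, { RequestItems: leftover }, attempts - 1, callback);
  });
}

// Fake sender for illustration: reports every item unprocessed on the
// first call, then succeeds.
var calls = 0;
function fakeSend(request, cb) {
  calls++;
  cb(null, { UnprocessedItems: calls === 1 ? request.RequestItems : {} });
}

retryBatch(fakeSend, { RequestItems: { 'my-table': [{ PutRequest: {} }] } }, 3,
  function (err, response) {
    // Two calls total: the initial request plus one retry of the leftovers.
    console.log(calls);
  });
```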
Dyno's retries on `ProvisionedThroughputExceededException` errors may be too aggressive for some situations. This PR requires you to opt in to dyno-managed retries:

- `config.retryAggressively = true` when you configure your dyno instance
- `options.throughputAttempts = X` when you make any individual request

If you set neither of these things, dyno itself will not retry any requests that fail with a throughput exception. However, the aws-sdk will retry in these situations up to 10 times over the course of ~30 seconds.
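The opt-in decision might look roughly like this. It is a sketch: the default attempt count used when only `retryAggressively` is set is an assumption for illustration, not a value from the PR:

```javascript
// How many times should dyno itself retry a throughput-throttled request?
function throughputAttempts(config, options) {
  // Per-request opt-in wins outright.
  if (options.throughputAttempts !== undefined) return options.throughputAttempts;
  // Client-wide opt-in uses some dyno-chosen default (assumed here).
  if (config.retryAggressively) return 10;
  // Neither flag set: dyno stays out entirely; only the aws-sdk's own
  // built-in retries still apply.
  return 0;
}

console.log(throughputAttempts({}, {}));                             // no opt-in
console.log(throughputAttempts({ retryAggressively: true }, {}));    // client-wide
console.log(throughputAttempts({}, { throughputAttempts: 5 }));      // per-request
```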
@willwhite @mick is this sufficient, or do you think that we ought to expose an option to entirely disable retries on throughput throttling?