Closed mikelorant closed 4 months ago
@soapiestwaffles Can you confirm if this is in progress? I am looking at adding this feature and don't want to duplicate the work if you are already working on it.
My plan is to use the method outlined in this issue: move the client instantiation into the goroutine, then add the backoff and retry logic to the client configuration.
Initial implementation in pull request #23.
@soapiestwaffles Let me know what you think, along with any changes or improvements.
Unit tests are passing.
When increasing the concurrency of the deletion tasks, AWS may send a rate limit response. This causes all other tasks to exit and an error to be reported.
I have attempted to address this in my own branch using the Failsafe package. While this implementation works, there is a better way.
The AWS SDK for Go has added the capability to implement a retry policy. This policy supports several behaviors, including exponential backoff.
From reading the documentation, this looks relatively simple to implement.
Take note of the following warning:
This means we should move the instantiation of the S3 client into the goroutine so that the retryer's rate limit is not shared across the deletion goroutines.