thelastpickle / cassandra-medusa

Apache Cassandra Backup and Restore Tool

s3 upload timeout is not long enough #539

Open jeff-knurek opened 2 years ago

jeff-knurek commented 2 years ago


It seems there was a past issue about extending the retries for awscli commands: https://github.com/thelastpickle/cassandra-medusa/issues/181, and from that a change was merged in this PR: https://github.com/thelastpickle/cassandra-medusa/pull/346

However, that value is hardcoded as 5. While that number of retries is suitable for most files, it still isn't enough for larger files (and/or slow networks). Maybe the ideal solution is to allow MAX_UP_DOWN_LOAD_RETRIES to be set via an env var, so the user can decide what is appropriate for their cluster?
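A minimal sketch of that idea in Python (Medusa's own language); the env var name `MEDUSA_MAX_UP_DOWN_LOAD_RETRIES` and the `upload_with_retries` helper are illustrative assumptions, not Medusa's actual API:

```python
import os
import time

# Assumption: a new env var overrides the retry count, falling back to the
# current hardcoded default of 5 when it is unset.
MAX_UP_DOWN_LOAD_RETRIES = int(os.environ.get("MEDUSA_MAX_UP_DOWN_LOAD_RETRIES", "5"))

def upload_with_retries(upload_fn, *args, **kwargs):
    """Call upload_fn, retrying up to MAX_UP_DOWN_LOAD_RETRIES times with backoff."""
    for attempt in range(MAX_UP_DOWN_LOAD_RETRIES):
        try:
            return upload_fn(*args, **kwargs)
        except Exception:
            if attempt == MAX_UP_DOWN_LOAD_RETRIES - 1:
                raise  # out of attempts, surface the failure
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ... between attempts
```

Keeping `5` as the fallback would preserve today's behavior for anyone who never sets the variable.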


rzvoncek commented 8 months ago

Hi. This relates to #666. I agree we should give more flexibility for the retry configuration.
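One shape that flexibility could take is a config option rather than (or in addition to) an env var. A sketch assuming an INI-style file like Medusa's; the section and option names below are hypothetical:

```python
import configparser

# Hypothetical: read the retry count from a medusa.ini-style config, keeping
# the current default of 5 when the option is absent.
config = configparser.ConfigParser()
config.read("medusa.ini")
max_retries = config.getint("storage", "max_up_down_load_retries", fallback=5)
print(f"Using up to {max_retries} retries per transfer")
```

A config option has the advantage of living alongside the other storage settings, while an env var override is easier to tweak per deployment; the two approaches are not mutually exclusive.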