Don't skip work if the bucket already exists; just upload everything every time you run `init`.
BECAUSE: This lets you add new objects to this script without obliterating all your buckets first.
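A minimal sketch of the idempotent shape this implies, assuming a boto3-style client (the `init_bucket` name and the `OBJECTS` table are illustrative, not from the real script). The client is passed in so a stub can stand in for S3:

```python
OBJECTS = {
    "config.json": b"{}",
    "data.csv": b"a,b\n1,2\n",
}

def init_bucket(client, bucket, objects=OBJECTS):
    """Create the bucket if needed, then upload ALL objects unconditionally."""
    try:
        # boto3-style call; "already exists and is yours" is not an error here
        client.create_bucket(Bucket=bucket)
    except Exception as exc:
        if "BucketAlreadyOwnedByYou" not in type(exc).__name__:
            raise
    for key, body in objects.items():
        # No existence check: re-uploading is cheap, and it means a newly
        # added object lands on the next run without recreating the bucket.
        print(f"uploading s3://{bucket}/{key}", flush=True)
        client.put_object(Bucket=bucket, Key=key, Body=body)
```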
Don't try to print exceptions as strings; just let Python print the stack trace.
BECAUSE: I was getting MemoryErrors because Python couldn't allocate a 2GiB buffer, but a MemoryError prints as an empty string, so I had no idea what was going wrong.
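For illustration: `str()` on a bare `MemoryError` really is the empty string, while the formatted traceback always names the exception class. The helper names here are made up, a sketch of the two approaches:

```python
import traceback

def report_badly(exc):
    return str(exc)  # MemoryError() stringifies to "" -> silent failure

def report_well(exc):
    # Re-raise and let the traceback machinery format it: the class
    # name ("MemoryError") always appears, even with an empty message.
    try:
        raise exc
    except Exception:
        return traceback.format_exc()

print(repr(report_badly(MemoryError())))            # -> ''
print("MemoryError" in report_well(MemoryError()))  # -> True
```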
Always print() before doing an S3 call, instead of printing after the call.
BECAUSE: It makes the cause of an error, and the cause of a long delay, more obvious.
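One way to sketch this as a reusable pattern (the `announce` decorator is hypothetical, not part of the script): because the print happens before the call, if the process hangs or dies, the last line of output names the call that was in flight.

```python
import functools

def announce(fn):
    """Wrap fn so each invocation is printed BEFORE it runs.

    If the call stalls or raises, the most recent line of output
    already says which call it was and with which arguments.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"calling {fn.__name__} {args} {kwargs}", flush=True)
        return fn(*args, **kwargs)
    return wrapper

@announce
def put_object(Bucket, Key):
    # Stand-in for a real S3 call, just to show the ordering.
    return f"{Bucket}/{Key}"
```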
Remove the error-handling that would delete all your buckets if anything went wrong during init.
BECAUSE: If you try to add a new file, and something goes wrong, and the script deletes ALL of the team's buckets in response, all kinds of CI will start failing. If you really want to clean up, just run the script again with `clean`.
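A sketch of the resulting shape, with destructive cleanup split into its own explicit `clean` entry point (the function bodies and the stub-friendly client interface are assumptions):

```python
def init(client, buckets):
    # No try/except that deletes buckets on failure: if anything goes
    # wrong, fail fast and let the traceback surface. Re-running init
    # is safe, so partial state is not worth an automatic teardown.
    for bucket in buckets:
        client.create_bucket(Bucket=bucket)

def clean(client, buckets):
    # Deletion only happens when the user explicitly asks for it.
    for bucket in buckets:
        client.delete_bucket(Bucket=bucket)
```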
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Description of changes:
init
.init
.clean