Open vinhltr opened 12 hours ago
Thanks @vinhltr, I'll have to look into this more. Do you actually use `S3Object` on purpose? 99% of the time folks really just want `S3Bucket`. I'm inclined to disable `S3Object` by default as it's very problematic.
@ekristen My understanding is that to nuke an S3 bucket I have to wipe it first, which means deleting all of its objects, so yes, I'm using `S3Object` on purpose.
You aren't wrong, but `S3Bucket` wipes an entire bucket clean much more efficiently using bulk API calls. `S3Object` should really just always be disabled.
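For context on the efficiency difference: S3's bulk `DeleteObjects` API accepts up to 1,000 keys per request, while per-object deletion issues one `DeleteObject` call per key. A minimal boto3 sketch of the bulk approach (this is an illustration, not aws-nuke's actual implementation, and it ignores versioned buckets):

```python
def batch_keys(keys, size=1000):
    """Chunk a key list into batches; S3 DeleteObjects accepts at most 1000 keys."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]

def wipe_bucket(bucket_name):
    """Delete every object with bulk DeleteObjects calls, roughly what an
    efficient bucket wipe does; a per-object path would instead issue
    one delete_object request per key."""
    import boto3  # imported here so the chunking helper above stays dependency-free
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket_name):
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        for batch in batch_keys(keys):
            s3.delete_objects(Bucket=bucket_name, Delete={"Objects": batch})
```

With 67k objects that's ~67 bulk requests instead of 67k individual deletes, which is why excluding `S3Object` and letting the bucket wipe handle it is so much faster.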
Exclude `S3Object` in your configuration. You'll end up with the same result and it will be much quicker.
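A sketch of that config change, assuming the top-level `resource-types` filter syntax from the aws-nuke docs:

```yaml
resource-types:
  excludes:
    - S3Object   # let the S3Bucket wipe handle object deletion via bulk calls
```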
Noted, I'll make the adjustment in my config, thanks @ekristen
I've been having an issue with `S3Object` just this past week. Initially the symptom was just a stuck nuke process with no error, even with trace logging enabled. After doing some more digging, it turned out to be a specific issue in us-east-1, with a specific log bucket.
In my sandbox env there is an access-log bucket with roughly 67k objects (total bucket size <50 MB), all from a single bucket; I'd estimate the grand total across all existing buckets is roughly 70k objects. This causes a Go deadlock error when iterating over the `S3Object` resources. I then emptied the access-log bucket, retried the aws-nuke CLI, and it worked.
Some more context: my config does not filter anything for `S3Object`, and this is the first time I've seen this issue even though I've been using aws-nuke (the old repo and this fork) for almost a year now.
version: 3.24.0