ekristen / aws-nuke

Remove all the resources from an AWS account
https://ekristen.github.io/aws-nuke/
MIT License

golang deadlock with s3object #356

Open vinhltr opened 12 hours ago

vinhltr commented 12 hours ago

I've been having an issue with S3Object this past week. Initially the nuke process simply hung with no error, even with trace logging enabled. After some more digging, it turned out to be specific to us-east-1, and to one particular log bucket.

In my sandbox environment, there is an access-log bucket with roughly 67k objects (total bucket size under 50 MB), all in a single bucket; I'd estimate the grand total across all existing buckets at roughly 70k objects. Iterating over the S3Object resources triggers a Go runtime deadlock error. When I emptied the access-log bucket and re-ran the aws-nuke CLI, it worked.

[Screenshot (2024-10-01): Go runtime deadlock error output]

Some more context: my config does not filter S3Object at all, and this is the first time I've seen this issue in almost a year of using aws-nuke (both the old repo and this fork).

version: 3.24.0

ekristen commented 12 hours ago

Thanks @vinhltr, I'll have to look into this more. Do you actually use S3Object on purpose? 99% of the time folks really just want S3Bucket. I'm inclined to disable S3Object by default, as it's very problematic.

vinhltr commented 12 hours ago

@ekristen My understanding is that to nuke an S3 bucket, I have to empty it first, which means deleting all S3 objects, so yes, I'm using S3Object on purpose.

ekristen commented 11 hours ago

You aren't wrong, but S3Bucket wipes an entire bucket clean far more efficiently using bulk API calls. S3Object should really just always be disabled.

Exclude S3Object in your configuration. You'll end up with the same result, and it will be much quicker.
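For reference, a minimal sketch of that exclusion, assuming the v3 config's `resource-types` block:

```yaml
# aws-nuke config: skip per-object deletion and let S3Bucket
# empty buckets via bulk API calls instead.
resource-types:
  excludes:
    - S3Object
```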

vinhltr commented 11 hours ago

Noted, I'll make the adjustment in my config. Thanks @ekristen