ekristen / aws-nuke

Remove all the resources from an AWS account
https://ekristen.github.io/aws-nuke/
MIT License

[GitHub runner error] No explicit error when running nuke with a GitHub runner on a heavily used AWS account #304

Closed · brice-jacquin closed this 2 months ago

brice-jacquin commented 2 months ago

Hello,

I'm running into a recurring error in my implementation that I'm not able to fix for now.

Context: I'm implementing, for a company, a monthly cron job that will wipe out all resources on 4 AWS accounts that are heavily used as sandboxes by users. I also have 1 "dev" account where I can test all my automation, with very few resources.

Issue: When testing on the "dev" account, everything works more than fine.

When testing aws-nuke in dry run on a heavily used account, the job fails with:

[manual-nuke] The hosted runner: GitHub Actions 174 lost communication with the server. Anything in your workflow that terminates the runner process, starves it for CPU/Memory, or blocks its network access can cause this error.

Being on GitHub, it is difficult to get all the logs, but there do not seem to be any specific error logs from nuke (except for "normal" errors such as denied API calls and the like).

Have you ever been confronted with such an issue? Thanks for your help and time.

ekristen commented 2 months ago

First off, that's impressive: 1.4 million items to nuke. This is a pure memory problem more than anything else. aws-nuke has to store data for 1.4 million resources, and that's a ton. For that many resources you are probably looking at needing somewhere between 24-32 GB of memory at a minimum.
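As a rough back-of-envelope, working backwards from that estimate (the ~20 KB per-resource footprint is an assumption implied by the numbers above, not a measurement):

$$
1.4\times10^{6}\ \text{resources} \times \sim\!20\ \text{KB/resource} \approx 28\ \text{GB}
$$

That is far more memory than a standard GitHub-hosted runner provides, which is consistent with the runner being starved and losing communication rather than nuke logging an explicit error.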

I would suggest doing the initial run on your laptop and then using GitHub Actions to do the periodic clean once the account is down to a more manageable size.
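For the recurring clean, a scheduled workflow along these lines could work. This is only a sketch: the config path, runner label, and credential setup are assumptions, aws-nuke is assumed to already be installed on the runner, and exact CLI flags can differ between versions, so check the docs for yours.

```yaml
# Hypothetical monthly cleanup workflow; paths and labels are assumptions
name: monthly-aws-nuke

on:
  schedule:
    - cron: "0 3 1 * *" # 03:00 UTC on the 1st of every month

jobs:
  nuke:
    # Hosted runners have limited memory; point this at a larger
    # self-hosted runner if the account is still big.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run aws-nuke (dry run by default)
        # Assumes aws-nuke is on PATH and AWS credentials come from the
        # environment; add --no-dry-run only once the dry-run output looks right.
        run: aws-nuke run --config ./nuke-config.yaml
```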

brice-jacquin commented 2 months ago

Hello,

Thanks a lot for your quick reply, really appreciate it. I was thinking something like that, but wasn't sure that nuke kept all of its objects to delete in memory. So I guess my options are:

Most of the objects are S3 objects, where each file (and each version of a file) counts as one object. And yeah, the accounts have not been cleaned in years, and we have a lot of S3 buckets filled with tons of unused files.

Thanks a lot

ekristen commented 2 months ago

@brice-jacquin I would 100% recommend adding S3Object to the resource-types excludes list. S3Bucket will handle removing all the objects if you are going to remove the bucket anyway. The only time S3Object is useful is when you need to clean specific objects out of a bucket while leaving the bucket in place. It's much more manageable that way.
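For reference, a minimal sketch of what that exclusion looks like in the config. The account IDs and regions are placeholders, and key names can vary between aws-nuke versions, so verify against the docs for the release you run:

```yaml
# Minimal aws-nuke config sketch; IDs and regions are placeholders
regions:
  - global
  - eu-west-1

blocklist:
  - "999999999999" # an account that must never be nuked

accounts:
  "000000000000": {} # the sandbox account to clean

resource-types:
  excludes:
    - S3Object # S3Bucket deletion already removes the objects inside
```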

brice-jacquin commented 2 months ago

@ekristen Thanks so much, excluding S3 objects fixed it easily. I didn't know that nuke deleted S3 objects when deleting the bucket. Indeed, the tons of items in S3 were exhausting the memory of my job.

Thanks for the help and the quick responses. And thanks for the nice work and for taking over the nuke project.

ekristen commented 2 months ago

Glad that helped! Appreciate the kind words!