aws / aws-cdk-rfcs

RFCs for the AWS CDK

Garbage Collection for Assets #64

Open eladb opened 5 years ago

eladb commented 5 years ago

Description

Assets which are uploaded to the CDK's S3 bucket and ECR repository are never deleted. This will incur costs for users in the long term. We should come up with a story on how those should be garbage collected safely.

Initially we should offer cdk gc, which will track down unused assets (e.g. by tracing them back from deployed stacks) and offer to delete them. We can provide an option to automatically run this after every deployment (either in the CLI or through CI/CD). Later we can even offer a construct that you deploy to your environment and that does this for you.

Proposed usage:

cdk gc [ENVIRONMENT...] [--list] [--type=s3|ecr]

Examples:

This command will find all orphaned S3 and ECR assets in a specific AWS environment and will delete them:

cdk gc aws://ACCOUNT/REGION

This command will garbage collect all assets in all environments that belong to the current CDK app (if cdk.json exists):

cdk gc

Just list orphaned assets:

cdk gc --list

Roles

| Role | User |
| --- | --- |
| Proposed by | @eladb |
| Author(s) | @kaizen3031593 |
| API Bar Raiser | @njlynch |
| Stakeholders | @rix0rrr, @nija-at |

See RFC Process for details

Workflow


The author is responsible for progressing the RFC according to this checklist, and for applying the relevant labels to this issue so that the RFC table in the README gets updated.

sam-goodwin commented 5 years ago

How do we generate keys when uploading? Just random?

If it were deterministic, we could use lifecycle rules of object versions: https://aws.amazon.com/about-aws/whats-new/2014/05/20/amazon-s3-now-supports-lifecycle-rules-for-versioning/

That way objects are only considered for deletion if they are not 'current'.

sam-goodwin commented 5 years ago

Now that I think about it, versioning would only help if we were regularly re-uploading files. That's probably not the case?

eladb commented 5 years ago

Lifecycle rules sound like a good option for sure. The object keys are based on the hash of the asset's contents, so that we avoid re-uploading when the content hasn't changed (code).
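
For context, the key derivation is roughly of this shape (a minimal TypeScript sketch; the CDK's real asset hashing also covers directory trees and bundling options, and the helper name here is made up):

    import { createHash } from 'crypto';
    import { readFileSync } from 'fs';

    // Hypothetical helper: derive the S3 object key for a file asset from a
    // SHA-256 hash of its contents, so an unchanged asset maps to the same
    // key and never needs to be re-uploaded.
    function assetKey(filePath: string): string {
      const hash = createHash('sha256').update(readFileSync(filePath)).digest('hex');
      return `${hash}.zip`;
    }

    console.log(assetKey('lambda-handler.zip'));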

rix0rrr commented 5 years ago

I'm not sure this works if there are two stacks, and only one is deployed with a new version of the asset.

In my mind, the other stack should still point to the old version of the asset (should not be automatically updated), but now the asset will be aged out after a while and the undeployed stack will magically break after a certain time.


Alternative idea (but much more involved and potentially not Free Tier): a Lambda that runs every N hours, which enumerates all stacks, detects the assets that are in use (from stack parameters or in some other way), and clears out the rest?

rix0rrr commented 5 years ago

> Now that I think about it, versioning would only help if we were regularly re-uploading files. That's probably not the case?

I thought you meant expiration for non-current versions. The latest version still stays alive indefinitely.

But I think this runs afoul of reuse across stacks.

eladb commented 5 years ago

:+1: on a garbage-collecting Lambda that runs every week or so, with the ability to opt out, plus some cost warnings in the docs and online

sam-goodwin commented 5 years ago

Seems risky. What if a deployment happens while it's crawling?

eladb commented 5 years ago

Yeah, perhaps only collect old assets (a month old), and we can salt the object key such that if a whole month has passed, it will be a new object?

Thinking out loud.... requires a design

thekevinbrown commented 4 years ago

I've got quite a few assets in my bucket now after a month or so of deploying from CD.

How do I determine which ones are in use, even manually? I can't seem to figure out the correlation between the names of the files in S3 and anything else I could use to determine what's being used. The lambdas don't point back at them in any way I can see.

I want to eventually write a script to do this safely for my use case, but absent a way of telling what's being used I'm stuck.

ansman commented 3 years ago

S3 now has lifecycle rules that can automatically delete objects a number of days after creation, which might be a solution too.
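
For illustration, such a rule is a one-liner in CDK (a sketch only; the bucket here is a stand-in, since the real staging bucket is owned by the bootstrap stack, and, as noted below, blind expiration can delete assets that deployed stacks still reference):

    import { App, Stack, Duration } from 'aws-cdk-lib';
    import * as s3 from 'aws-cdk-lib/aws-s3';

    const app = new App();
    const stack = new Stack(app, 'AssetBucketStack');

    // Expire every object 90 days after creation, whether or not a deployed
    // stack still references it.
    new s3.Bucket(stack, 'Assets', {
      lifecycleRules: [{ expiration: Duration.days(90) }],
    });

    app.synth();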

mlafeldt commented 3 years ago

> How do I determine which ones are in use, even manually?

The ones in use are those referenced in active CloudFormation stacks deployed via CDK.

Those stack templates will include something like this:

    "GenFeedFunc959C5085": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": {
            "Fn::Sub": "cdk-xxx-assets-${AWS::AccountId}-${AWS::Region}"
          },
          "S3Key": "5946c35f6797cf45370f262c1e5992bc54d8e7dd824e3d5aa32152a2d1e85e5d.zip"
        },

> S3 now has lifecycle rules that can automatically delete objects a number of days after creation which might be a solution too.

Unfortunately, that won't help, since old objects might still be in use, e.g. when a Lambda hasn't been deployed in a while.

(It doesn't help that all assets are stored in the same "folder" either.)

ansman commented 3 years ago

Doesn't lambda copy the resources during deployment?

mlafeldt commented 3 years ago

> Doesn't lambda copy the resources during deployment?

The Lambda service will cache functions, but AFAIK there's no guarantee that they will be cached forever.

ansman commented 3 years ago

I'm fairly certain that Lambda only reads the assets during deployment and that they aren't needed afterwards. You can, for example, deploy smaller assets to Lambda without using S3, and those aren't stored there.

mlafeldt commented 3 years ago

Would love to read about the behavior in the docs somewhere. That 50 MB upload limit must exist for a reason. Haven't found anything so far though.

ansman commented 3 years ago

I can't find any concrete resources on this, but I haven't found any docs saying that you cannot delete the object after deployment either. Also, my Lambdas don't have any permission to read my staging bucket, nor do they mention that the object comes from S3, so I doubt the object needs to be kept around.

kjpgit commented 3 years ago

I would suggest writing S3 objects as transient/YYYY/MM/DD/HASH.zip and having a lifecycle policy to remove transient/* files after 3 days. You'd get caching for builds done on the same day. This is similar to the salted hash suggested above, but a lot more explicit, observable, and not subject to collisions. Also, the hash could stay the same day to day, so as not to trigger a spurious Lambda redeploy.
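
A rule scoped to that prefix could look like this in CDK (a sketch of the idea only; the bucket, key layout, and three-day window are placeholders taken from the suggestion above):

    import { App, Stack, Duration } from 'aws-cdk-lib';
    import * as s3 from 'aws-cdk-lib/aws-s3';

    // Hypothetical key layout: transient/YYYY/MM/DD/HASH.zip, with the date
    // prefix chosen once per build and reused by every later step.
    const datePrefix = new Date().toISOString().slice(0, 10).replace(/-/g, '/');
    console.log('example asset key:', `transient/${datePrefix}/HASH.zip`);

    const app = new App();
    const stack = new Stack(app, 'TransientAssetsStack');

    // Objects under transient/ expire three days after upload; anything
    // outside that prefix is left alone.
    new s3.Bucket(stack, 'TransientAssets', {
      lifecycleRules: [{ prefix: 'transient/', expiration: Duration.days(3) }],
    });

    app.synth();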

The main issue here is that you need to pick your date/prefix only once, and stick with it, for the whole build process. You don't want to upload files at Tue 23:59 and have a later process/function look under Wednesday for the object. Perhaps just having a --transient-asset-prefix argument to cdk deploy would be enough?

Option 2 for S3 is to check the last-modified date on objects and just force a re-upload every N days. Then you could have a lifecycle policy to delete old objects (> N+1 days), and that shouldn't race with deploys. You'd need to be 100% sure the re-upload doesn't race with S3's lifecycle process, however. That's why I prefer the immutable objects in option 1: there is no chance of a race.

ECR is different because it's not transient, it is the long term backing store for Lambda. Just spitballing here, but if there was a "transient ECR repo" with a 7 day deletion policy, you could push new builds to that, and then during cloudformation deploy, those images would then be "copied/hardlinked" to a "runtime ECR repo" with lifetime managed by cloudformation, e.g. removed upon stack update/delete. Maybe the same thing could be accomplished if cloudformation could set ECR Tags that "pin" images in the transient repo that are in use, and tagged images are excluded from lifecycle rules. However, to avoid races, builds have to push something (e.g. at least some tiny change) to the transient ECR repo to refresh the image age (at least if the image is older than a day), so it won't be deleted right before cloudformation starts tracking / pinning it.

polothy commented 3 years ago

Word of caution: doing just a time-based deletion on assets is a little risky. We have had this scenario play out:

So, unsure how you would even accomplish this, but ideally don't delete any S3 assets that are referenced in existing CFN templates.

polothy commented 3 years ago

> So, unsure how you would even accomplish this, but ideally don't delete any S3 assets that are referenced in existing CFN templates.

Sorry, was reading quickly - sounds like the RFC is going to try to do this, so yay!

rehanvdm commented 3 years ago

If the only reason is to prevent CFN from "freezing" if a rollback is required, then just having a deletion policy of, say, 1 month should be okay, given that there is no scenario where our deployment frequency is less than once a month.

So, with the assumption that our deployment frequency is much quicker than a month, we can rest assured that no old assets are referenced within the current CFN, so that if a rollback does occur, the assets will still be present within the bucket for it to complete.

rehanvdm commented 3 years ago

We are having a discussion about this on the CDK slack here (link might not work in the near future): https://cdk-dev.slack.com/archives/C018XT6REKT/p1620291773488400

One solution a member (Julien Peron) is using, even though he says it's not ideal:

> For this, I tried using tags on resources in the CDK bucket in combination with a lifecycle policy. It's not fully flawless, but it may help you. It's quite simple; I use one tag with 3 possible values: current, previous, outdated. At each deploy, I apply the "current" value. If the tag is already "current", it becomes "previous". If "previous", it becomes "outdated". Then the lifecycle policy deletes all "outdated"-tagged objects.
>
> For the resource tagging, I tag all resources in the bucket at every deployment. I don't have many resources, and since the lifecycle policy deletes the old ones, it's quite fast. This method is just a one-shot and is totally not bulletproof, but I believe there is something smart to do here. In my case, I didn't even add any checks before tagging, which means that even if the deploy fails, resources are tagged, which is not good. But yeah, just throwing the idea out here to help. Here is a gist with the kind of script I'm using to tag: https://gist.github.com/julienperon/a048603f50ffe092a952d39672357618

Maybe his method, if refined, could work?
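
A rough reconstruction of that rotation (a sketch only, not the linked gist; it assumes the AWS SDK for JavaScript v3, a placeholder bucket name, and that a bucket lifecycle rule filtered on lifecycle=outdated does the actual deletion):

    import {
      S3Client,
      paginateListObjectsV2,
      GetObjectTaggingCommand,
      PutObjectTaggingCommand,
    } from '@aws-sdk/client-s3';

    const BUCKET = 'cdk-hnb659fds-assets-123456789012-us-east-1'; // placeholder
    const s3 = new S3Client({});

    // Rotate a lifecycle tag on every object: (untagged) -> current ->
    // previous -> outdated. Intended to run once per successful deploy.
    async function rotateTags(): Promise<void> {
      for await (const page of paginateListObjectsV2({ client: s3 }, { Bucket: BUCKET })) {
        for (const object of page.Contents ?? []) {
          const { TagSet = [] } = await s3.send(
            new GetObjectTaggingCommand({ Bucket: BUCKET, Key: object.Key }),
          );
          const current = TagSet.find((t) => t.Key === 'lifecycle')?.Value;
          const next =
            current === 'current' ? 'previous' :
            current === 'previous' ? 'outdated' : 'current';
          await s3.send(
            new PutObjectTaggingCommand({
              Bucket: BUCKET,
              Key: object.Key,
              Tagging: { TagSet: [{ Key: 'lifecycle', Value: next }] },
            }),
          );
        }
      }
    }

    rotateTags().catch(console.error);

As the author notes, this is not bulletproof: an unchanged asset that is still referenced will also rotate toward "outdated" after a few deploys.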

rehanvdm commented 3 years ago

We are approaching 0.5TB of assets in the staging bucket. I can only imagine how much large companies have :(

jogold commented 2 years ago

See https://github.com/jogold/cloudstructs/blob/master/src/toolkit-cleaner/README.md for a working construct that does asset garbage collection.

a-h commented 2 years ago

In my projects, we use separate AWS accounts for each environment (testing, staging, production).

The use of assets somewhat leads you down the path of one branch per environment in your CI/CD pipeline.

A typical process might be:

Problems with this approach

Suggestions

The best thing about DockerImageAssets is the simplicity of it, and the low effort to get started. It's great to just have one command to run to build and deploy everything, but I don't see how to use it for multi-environment deploys without accepting that the built software might be different to the version you tested in another environment.

Traditional workflows involve building the application code separately from the infrastructure, which we want to avoid, because the infrastructure and software are often tightly coupled, e.g. a new feature needs to push to a newly created SQS queue, or use a new DynamoDB table.

I propose optimising for a workflow that's similar to this:

    export APP_VERSION=v0.0."`git rev-list --count HEAD`"-"`git rev-parse --short HEAD`"

This workflow:

wz2b commented 1 year ago

I think the issue with manually deleting artifacts out of the staging bucket is that there isn't an easy way to tell what's still in use. Part of the reason for this is that any single stack description .json may refer to lots of artifact .zip files including very old ones that haven't changed but are still part of the stack. This is an even worse problem if you have multiple stacks sharing a single staging directory.

I have three stacks sharing the same bootstrap/artifact directory. In retrospect this was a mistake but I didn't think to separate them when I started. Trying to work out a manual way to do this, the thought I have is to run get-template for each of your active stacks and note what resource files (.zip) are called out, for example:

  "stack1234lambdas3triggerC83C4999": {
   "Type": "AWS::Lambda::Function",
   "Properties": {
    "Code": {
     "S3Bucket": {
      "Fn::Sub": "cdk-hnb659fds-assets-${AWS::AccountId}-us-east-1"
     },
     "S3Key": "b202ff26ae16f03f3f28d7e48d3aeb9d47201b7084e2361973f7ccdb1d3b78ed.zip"
    },

I think if you collect all those S3Keys you'll have the list of what's still being called out, and you can remove the other artifact bundles without breaking CDK. There's still the problem of the old templates (I have about 400 .json template files currently), but I'm not sure whether or not you need to save those at all. It seems like a new one gets uploaded every time. My thought here is that those files are safe to delete just by date.
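
A minimal sketch of that manual procedure (assuming the AWS SDK for JavaScript v3, a placeholder bucket name, and that every asset key is a 64-hex-character hash ending in .zip as in the excerpt above; it only lists candidates and deletes nothing):

    import {
      CloudFormationClient,
      paginateListStacks,
      GetTemplateCommand,
    } from '@aws-sdk/client-cloudformation';
    import { S3Client, paginateListObjectsV2 } from '@aws-sdk/client-s3';

    const BUCKET = 'cdk-hnb659fds-assets-123456789012-us-east-1'; // placeholder
    const cfn = new CloudFormationClient({});
    const s3 = new S3Client({});

    async function main(): Promise<void> {
      // 1. Collect every asset key referenced by a live stack's template.
      const inUse = new Set<string>();
      for await (const page of paginateListStacks({ client: cfn }, {})) {
        for (const summary of page.StackSummaries ?? []) {
          if (summary.StackStatus === 'DELETE_COMPLETE') continue;
          const { TemplateBody = '' } = await cfn.send(
            new GetTemplateCommand({ StackName: summary.StackName }),
          );
          for (const match of TemplateBody.matchAll(/[0-9a-f]{64}\.zip/g)) {
            inUse.add(match[0]);
          }
        }
      }

      // 2. Report objects in the asset bucket that no live template references.
      for await (const page of paginateListObjectsV2({ client: s3 }, { Bucket: BUCKET })) {
        for (const object of page.Contents ?? []) {
          if (object.Key && !inUse.has(object.Key)) {
            console.log('unreferenced:', object.Key);
          }
        }
      }
    }

    main().catch(console.error);

Cross-account or cross-region setups, nested stacks, and non-.zip keys (for example the old template .json files mentioned above) would need extra handling.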

createchange commented 1 year ago

Hi - I was pointed to this issue / feature request by AWS Support. I see comments which pertain to cost concerns for the various assets being retained indefinitely, but wanted to raise another concern.

Our development team uses CDK for various services, and the indefinite retention of assets had gone unnoticed until I discovered it this morning. I operate in a security capacity and recently implemented AWS Inspector continuous scanning on container images. Findings are configured to bubble up to Security Hub.

To my surprise, my Security Hub was flooded with findings for CVEs. I have thousands upon (tens of?) thousands of findings, specifically due to the lack of garbage collection of CDK ECR images. This is going to cause a constant game of whack-a-mole until:

  1. I whitelist these images from the scanning (which I think is possible, and which maybe I should just go ahead and do?)
  2. Garbage collection is implemented

Regardless, I wanted to bring another story for why garbage collection would be appreciated.

a-h commented 1 year ago

@createchange - that's how I initially noticed the problem too.

ECR's cleanup is a bit silly, in that it can be configured to delete "old" containers but has no understanding of whether they're in use or not, so I wrote a tool that only deletes containers that are not in use by Lambda and ECS task definitions.

https://github.com/a-h/cdk-ecr-asset-cleaner

It doesn't tackle the buildup of S3 related assets, but it might be useful for you.
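
A stripped-down sketch of the ECR in-use check (assuming the AWS SDK for JavaScript v3 and a placeholder repository name; unlike the linked tool, it checks only Lambda functions, not ECS task definitions, and it only prints, never deletes):

    import { LambdaClient, paginateListFunctions, GetFunctionCommand } from '@aws-sdk/client-lambda';
    import { ECRClient, paginateDescribeImages } from '@aws-sdk/client-ecr';

    const REPOSITORY = 'cdk-hnb659fds-container-assets-123456789012-us-east-1'; // placeholder
    const lambda = new LambdaClient({});
    const ecr = new ECRClient({});

    async function main(): Promise<void> {
      // 1. Collect the image URIs used by container-image Lambda functions.
      const inUse: string[] = [];
      for await (const page of paginateListFunctions({ client: lambda }, {})) {
        for (const fn of page.Functions ?? []) {
          if (fn.PackageType !== 'Image') continue;
          const detail = await lambda.send(new GetFunctionCommand({ FunctionName: fn.FunctionName }));
          if (detail.Code?.ImageUri) inUse.push(detail.Code.ImageUri);
        }
      }

      // 2. Print ECR images whose digest or tag no function references.
      for await (const page of paginateDescribeImages({ client: ecr }, { repositoryName: REPOSITORY })) {
        for (const image of page.imageDetails ?? []) {
          const referenced = inUse.some((uri) =>
            (image.imageDigest !== undefined && uri.includes(image.imageDigest)) ||
            (image.imageTags ?? []).some((tag) => uri.endsWith(`:${tag}`)),
          );
          if (!referenced) console.log('unreferenced:', image.imageDigest, image.imageTags ?? []);
        }
      }
    }

    main().catch(console.error);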

eddie-atkinson commented 1 year ago

Has there been any movement on this? My dev stack accumulated 144 undeleted images within a few days of people pushing test images. This incurs quite significant costs, which the DockerImageAsset construct does not make easy to resolve.

SamStephens commented 1 year ago

Thanks @createchange for pointing that out. I just encountered exactly the same issue.

I'd argue that this brings some security urgency to addressing this issue. The flood of spurious security alerts for images that potentially have not been deployed for years makes AWS Inspector basically unusable to protect our CDK assets in ECR.

Note there's another problem for the security inspector: it is very difficult to attribute which CDK component actually generated a given CDK asset image. This means that if a vulnerability is reported, we have to engage in detective work to find where it should be remediated.

vincent-dm commented 1 year ago

We also experience this problem: hundreds of images in ECR of which only a handful are currently being used. Please provide some way to prevent this from spinning out of control!

nivsto commented 1 year ago

We also experience the same problem - thousands of images in ECR without any easy way to filter out unused images.

tobiasfeil commented 1 year ago

Is this seriously not implemented? I'd rather use plain CFN then, except I've already migrated our whole stack to CDK and implemented new features in that context - only to realize that this vital piece of functionality, which I would have never expected to be missing, isn't there.

tenjaa commented 1 year ago

https://docs.aws.amazon.com/cdk/api/v2/docs/app-staging-synthesizer-alpha-readme.html

wz2b commented 1 year ago

I come back to this issue every few months to see if it has moved. I think the thing that goes wrong with time based deletion (as explained pretty well by @polothy) is that during a rollback the asset you're rolling back to might not be there any more, so the rollback will fail. So I think the thing we need to be able to do is figure out which assets are actually being used, not when we're deploying something new but right now.

If you go to CloudFormation in the console you can look at the active template. For example, one of my running stacks has a bunch of Lambda functions in it, and each one refers to a specific asset:

    "Code": {
     "S3Bucket": "cdk-hnb659fds-assets-334383426254369-us-east-1",
     "S3Key": "0a22be02f1325321515f5e129533e7f874b5128ea692082d328bf493c1e63.zip"
    },

If you have multiple stacks, what you would have to do to clean things up would be to look at ALL of your deployed stacks, and build a list of S3Keys that are referenced in any of them. Then, you could delete anything else.

My asset zip files are all lambdas. I'm not sure how other types of assets show up because I don't have any. But if I'm right about this, can we make an external cdk-gc program that does what I'm describing?

  1. Pull all the active stacks in the account and look for S3Bucket that points to the bucket you want to clean up, and build a list of S3Keys referenced by all stacks
  2. Delete every artifact that's not in that unified, merged list

Is that 100% safe? Somebody tell me why it's not.

Lewenhaupt commented 1 year ago

> I come back to this issue every few months to see if it has moved. I think the thing that goes wrong with time based deletion (as explained pretty well by @polothy) is that during a rollback the asset you're rolling back to might not be there any more, so the rollback will fail. So I think the thing we need to be able to do is figure out which assets are actually being used, not when we're deploying something new but right now.
>
> If you go to CloudFormation in the console you can look at the active template. For example, one of my running stacks has a bunch of Lambda functions in it, and each one refers to a specific asset:
>
>     "Code": {
>      "S3Bucket": "cdk-hnb659fds-assets-334383426254369-us-east-1",
>      "S3Key": "0a22be02f1325321515f5e129533e7f874b5128ea692082d328bf493c1e63.zip"
>     },
>
> If you have multiple stacks, what you would have to do to clean things up would be to look at ALL of your deployed stacks, and build a list of S3Keys that are referenced in any of them. Then, you could delete anything else.
>
> My asset zip files are all lambdas. I'm not sure how other types of assets show up because I don't have any. But if I'm right about this, can we make an external cdk-gc program that does what I'm describing?
>
>   1. Pull all the active stacks in the account and look for S3Bucket that points to the bucket you want to clean up, and build a list of S3Keys referenced by all stacks
>   2. Delete every artifact that's not in that unified, merged list
>
> Is that 100% safe? Somebody tell me why it's not.

Seems reasonable and should take care of the rollback issues described earlier?

onhate commented 1 year ago

I've made this little CLI (run via npx) that implements the logic above. I have run it on a dev account we have, and it is looking OK so far.

https://github.com/onhate/cdk-gc

acomagu commented 1 year ago

Why should I keep assets locally as well, when I can download them from S3? If the upload to S3 is successful, the local copies could be deleted, and there could be a mechanism to download them from S3 when needed, maybe?

awsmjs commented 10 months ago

Closing this ticket as it does not align with current priorities. We don't have the bandwidth to collaborate on design or implementation.

wz2b commented 10 months ago

I completely understand.


SamStephens commented 10 months ago

@awsmjs sorry, but this is pathetic. AWS has given us a solution that consumes unbounded resources (which should never have passed a design review), and then considers fixing that not to be a priority? As I've said earlier on this issue, this has security implications:

> The flood of spurious security alerts for images that potentially have not been deployed for years makes AWS Inspector basically unusable to protect our CDK assets in ECR.

wz2b commented 10 months ago

I have been thinking this might be better off as a standalone tool anyway, and the https://github.com/onhate/cdk-gc tool mentioned above looks like a good start to me. What might be nice would be to work out the kinks; then merging it into the CDK as core functionality would be more palatable.

AlexGurtoff commented 10 months ago

@awsmjs Seriously? We're sitting on over a terabyte of useless CDK assets, accumulated over the past year. It's costing us more than $23 a month for absolutely no reason. We're not just burning cash, but also hogging storage we don't need. In addition, there are the security aspects mentioned earlier. Considering these factors, don't you think you should prioritize addressing this issue?

SamStephens commented 10 months ago

@awsmjs it's also interesting to speculate on the environmental impact of all the wasted storage caused by this decision, considering people's concern with the environmental impact of cloud computing.

mrpackethead commented 10 months ago

@awsmjs, since when do the CDK team's priorities not align with the Amazon Leadership Principles?

This is deliberately causing AWS customers to spend money unnecessarily.

sholtomaud commented 10 months ago

hey @awsmjs here is an idea ... umm ... maybe hire some more staff in either/both the cdk team and/or the AI team and fix this shit. $8k/yr for a CDK Landingzone is just dumb, now we don't even get garbage collection because what? Don't cry poor. Fix it and don't piss people off. You have responsibilities now so bad luck.

vincent-dm commented 10 months ago

No need for the aggressive tone; we don't know if @awsmjs has any authority over hiring.

But I do agree that, since CDK is a tool of the for-profit AWS product, maintainers should aim higher than the average volunteer-run open-source project.

I understand that staffing is a constraint and priorities have to be set, but the CDK team should then inform whoever is responsible for staffing that the CDK team size is inadequate vis-à-vis its responsibilities. And also, put such tickets on some backlog then, instead of closing them as if they do not represent a real need from the customers who are -in the end- the ones paying for this.

a-h commented 10 months ago

For S3 assets, I now have 1.8TB of junk in my test account. The staging account probably has 1/3 of that amount, and the production account 1/3 of the amount of the staging account.

So, in total, I'm probably wasting around 2.5TB of S3 storage on this project.

At $0.023 per GB: 2.5 TB × 1024 GB/TB = 2560 GB, and 2560 GB × $0.023 = $58.88 per month, or around $700 per year.

It's not a lot of money, especially compared to CloudWatch or AWS Config, but it is a waste of some money and I don't like the idea that this will continue increasing, forever, with no prospect of being reduced.

CDK's NPM stats show 1,209,474 downloads per week. Out of those, I assume a lot are waste, but guessing that 1 in 20 downloads corresponds to a team gives a usage count of around 60k teams using CDK every week. Out of those teams, I'm probably in the top 3% of use, so that's 1800 teams with the same or greater level of use than me.

If they're in the same position, that's $700 per year * 1800 teams = around $1,260,000 of waste. I don't know what the value of other roadmap items looks like, but to me it looks like it's worth putting 2 people on it for a year - although it would be a net loss for AWS revenue. 😁

For ECR, I'm using my custom cleaner https://github.com/a-h/cdk-ecr-asset-cleaner which means I'm still at 28 Docker containers. I would like for AWS to take that implementation and use it for inspiration to build an official CDK version.

mrpackethead commented 10 months ago

> hey @awsmjs here is an idea ... umm ... maybe hire some more staff in either/both the cdk team and/or the AI team and fix this shit. $8k/yr for a CDK Landingzone is just dumb, now we don't even get garbage collection because what? Don't cry poor. Fix it and don't piss people off. You have responsibilities now so bad luck.

Yes, all of that. +1

moltar commented 10 months ago

I wholly agree with the majority here - this is a real pain and should be addressed ASAP.

However, the incentives are not there, as AWS has a direct benefit from wasted storage. Thus systems thinking will suggest that there is no incentive to prioritize fixing this. The only lever we have as a community is to find a counter-incentive that will apply pressure. I think posting comments will not cut it. I think it's a good start to voice your opinion, but I am just saying it won't be enough to change the status quo.

Meanwhile, for those who are hurting, and might be interested in alternative solutions today, I recommend looking at these two solutions:

mrpackethead commented 10 months ago

> No need for the aggressive tone; we don't know if @awsmjs has any authority over hiring.
>
> But I do agree that, since CDK is a tool of the for-profit AWS product, maintainers should aim higher than the average volunteer-run open-source project.
>
> I understand that staffing is a constraint and priorities have to be set, but the CDK team should then inform whoever is responsible for staffing that the CDK team size is inadequate vis-à-vis its responsibilities. And also, put such tickets on some backlog then, instead of closing them as if they do not represent a real need from the customers who are -in the end- the ones paying for this.

People have been polite for long enough. We spend millions of dollars a year with AWS. Time to put customers' interests first.