samgiles opened this issue 5 years ago
Also, it is possible for a lifecycle policy rule to delete an image that is currently in use by a service with autoscaling rules. If the service decides to scale from 1 to 2 and the image no longer exists, it goes to 0.
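To make the scenario concrete, a count-based lifecycle policy like the sketch below (repository name illustrative) expires older images purely by count, with no awareness of which image a running service or its autoscaler still references:

```hcl
# Illustrative only: expires everything beyond the 10 most recent images,
# regardless of whether a running task or autoscaling group still needs one.
resource "aws_ecr_lifecycle_policy" "keep_last_10" {
  repository = "my-app" # hypothetical repository name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Keep only the 10 most recent images"
      selection = {
        tagStatus   = "any"
        countType   = "imageCountMoreThan"
        countNumber = 10
      }
      action = { type = "expire" }
    }]
  })
}
```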
An update on whether this is being considered would be really appreciated. Automated cleanup of ECR repositories is far riskier without it.
After accidentally nuking a couple of registries while importing them as resources into Terraform, I agree this would be a very much appreciated feature.
Can I +1 this issue? I was running `terraform apply` in a for-loop, trying to add `scanning=true` to everything, and a typo made months ago in one of the Terraform files caused a delete-then-create of a repo, which nuked all previously pushed images.
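Until there is native deletion protection, a `prevent_destroy` lifecycle block is a partial guard for this Terraform case: any plan that would destroy (and therefore replace) the repository fails at plan time instead of deleting the images. A minimal sketch with illustrative names:

```hcl
# Minimal sketch: prevent_destroy makes Terraform reject any plan that would
# destroy the repository, so a typo that forces replacement errors out at
# plan time instead of wiping the pushed images.
resource "aws_ecr_repository" "app" {
  name = "my-app" # illustrative repository name

  image_scanning_configuration {
    scan_on_push = true
  }

  lifecycle {
    prevent_destroy = true
  }
}
```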
Something similar to termination protection on EC2 instances would be really nice
+1
Hi all! I wanted to share that this is something we've been thinking about addressing. We have done some initial work, and it's a fairly broad thing to solve for. We don't have any specifics to share right now, but did want to say here that we are doing some research around deletion protection. Please add an upvote or comment if you're interested, thanks!
Protecting ECR images from deletion through the console is a must-have feature to prevent accidental deletions.
Upvote: this is needed for several reasons. Once images are deleted you can no longer trace vulnerabilities back to still-running components (no rescan possible), and accidental deletion can cause havoc in a production environment.
+1
+1
Kindly add this feature
This would be a neat feature to have
Tell us about your request: Add deletion protection to ECR repositories to prevent accidental deletion when using automation or in the console.
Which service(s) is this request for? ECR
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard? I'm trying to prevent a case where a production service is running in a container cluster, EKS or ECS, and something or someone wipes out a dependent repository in ECR, either by accident in the console, mis-config with automation (think Terraform), or otherwise. Preventing this would mitigate somewhat the need for additional backups or something.
Are you currently working around this issue? My plan is to use a tool like reg (https://github.com/genuinetools/reg) to automate backing up layers and images to S3, so that in the event of loss our most critical images can be restored automatically within moments, rather than requiring a mass rebuild of every image we need. If I have time, I might try to open-source a solution that backs up a set of critical tags for a named repository to S3 using a Lambda and events on ECR (I haven't yet checked whether there are appropriate events for this).