Open zguillen opened 3 months ago
Adding to this ticket:
We need to set up limits or alarms on the EC2 instances we use for notebooks. Specifically, when developing locally, developers need to manually shut down the instances. It's easy to forget to do, and the costs can add up quickly.
Low-hanging fruit: automate shutting down all EC2 instances every night (esp. notebooks).
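The nightly shutdown could be a small Lambda on an EventBridge schedule. A minimal sketch, assuming instances opt in via a hypothetical `auto-stop` tag (the tag name and the Lambda wiring are assumptions, not anything we have today):

```python
def running_instance_ids(pages):
    """Collect instance IDs from describe_instances response pages."""
    return [
        inst["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for inst in reservation["Instances"]
    ]

def stop_nightly(event=None, context=None):
    """Lambda handler: stop every running instance tagged auto-stop=true."""
    import boto3  # imported here so the pure helper above is testable offline

    ec2 = boto3.client("ec2")
    # Only touch instances that opted in via the (hypothetical) auto-stop tag,
    # so prod boxes are never stopped by accident.
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": "tag:auto-stop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = running_instance_ids(pages)
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```

Using `stop_instances` (not `terminate_instances`) means notebooks keep their EBS volumes and can be started again the next morning.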
Can we set the env for the dev and stage service stacks to have worker and web counts of 0?
Another item to address if we want to save costs is to add a lifecycle rule to our project buckets that eventually permanently deletes files with delete markers. Because we have versioning turned on, we can recover deleted/edited files when necessary (this has never happened, though), but it means we're paying indefinitely for files that get deleted.
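For the versioned buckets, this would be two lifecycle rules: one expiring noncurrent versions after some retention window, and one removing the delete markers left behind once no versions remain. A sketch with boto3 (the 30-day window and rule IDs are assumptions to be tuned):

```python
def lifecycle_rules(noncurrent_days=30):
    """Lifecycle config that stops versioned buckets accruing cost forever."""
    return {
        "Rules": [
            {
                "ID": "expire-noncurrent-versions",
                "Status": "Enabled",
                "Filter": {},  # empty filter = whole bucket
                # Permanently delete old versions after the retention window.
                "NoncurrentVersionExpiration": {"NoncurrentDays": noncurrent_days},
            },
            {
                "ID": "remove-expired-delete-markers",
                "Status": "Enabled",
                "Filter": {},
                # Clean up delete markers once no noncurrent versions remain;
                # this is what finally makes a "deleted" object free.
                "Expiration": {"ExpiredObjectDeleteMarker": True},
            },
        ]
    }

def apply_to_bucket(bucket_name):
    import boto3  # deferred so the rule builder stays testable offline

    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket_name, LifecycleConfiguration=lifecycle_rules()
    )
```

S3 requires `ExpiredObjectDeleteMarker` in its own rule (it can't be combined with a date- or day-based `Expiration`), hence the two-rule split.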
We could definitely add lifecycle rules to non-prod buckets to delete files permanently. But what about other items associated with drafts/records that will continue to grow (DataSync tasks, S3 access points, dataset files copied to EFS)?
We should add some checks/automations in place to help with costs, so that our dev/stage cloud instances don't eat up too much of our cloud budget. Ideas:
We also have quotas for AWS resources, especially DataSync-related ones, that we will certainly reach when shared across dev/stage/prod if we don't do something. Resources that will continue to grow that I can think of (multiply what we use by 3, one per deployment):