kteague opened this issue 4 years ago
This is a little thornier than I thought.
If you delete the CloudFormation stack, the LogGroups are deleted with it. However, if the LogGroups stack is deleted while instances that log to those LogGroups are still running, then the next time they send logs CloudWatch will simply create new LogGroups for them (with retention set to Never Expire). If you then try to re-create the LogGroups with Paco, it hits an error because the LogGroups already exist.
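For illustration, here is a small boto3 sketch (not part of Paco; the prefix is a made-up example) that lists the LogGroups under a prefix and flags any with no retention policy, which is the tell-tale sign of groups auto-created by CloudWatch rather than by the stack:

```python
import boto3

def find_auto_created_log_groups(prefix="/mynet/dev/"):
    """Return LogGroup names under `prefix` that have no retention policy.

    LogGroups auto-created by CloudWatch when an instance sends logs are set
    to Never Expire, i.e. `retentionInDays` is absent from the response.
    """
    logs = boto3.client("logs")
    paginator = logs.get_paginator("describe_log_groups")
    suspects = []
    for page in paginator.paginate(logGroupNamePrefix=prefix):
        for group in page["logGroups"]:
            if "retentionInDays" not in group:
                suspects.append(group["logGroupName"])
    return suspects
```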
Currently the order for an ASG with a LogGroup LaunchBundle is:
provision:
delete:
So typically this scenario shouldn't happen. But if resources are deleted manually (e.g. you manually delete a LogGroup and then run `paco provision -n netenv.mynet.env` while instances are still logging), LogGroups can be auto-created outside of CloudFormation and break Paco.
It might be helpful to have a field that explicitly deletes existing LogGroups before provisioning. That could be used in test/scratch environments where fiddling around can get you into such a state.
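As a rough sketch of what such a field could trigger (the behaviour is hypothetical, not something Paco does today), a pre-provision step could simply delete any colliding LogGroups:

```python
import boto3
from botocore.exceptions import ClientError

def delete_existing_log_groups(log_group_names):
    """Delete pre-existing LogGroups so a fresh provision won't collide.

    Only suitable for test/scratch environments -- this destroys log data.
    """
    logs = boto3.client("logs")
    for name in log_group_names:
        try:
            logs.delete_log_group(logGroupName=name)
        except ClientError as error:
            # A group that doesn't exist is fine; re-raise anything else.
            if error.response["Error"]["Code"] != "ResourceNotFoundException":
                raise
```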
Another approach would be for Paco to detect whether any of the LogGroups already exist, and if so do a CloudFormation import on those groups so they simply get brought under CloudFormation control. This would be the cleanest solution, but it's a non-trivial feature to implement.
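For a sense of what that would involve (this is only an outline of the underlying boto3 calls, not an existing Paco feature, and the names are made up for illustration), a resource import is done with an IMPORT change set:

```python
import boto3

def import_existing_log_group(stack_name, logical_id, log_group_name, template_body):
    """Bring an already-existing LogGroup under a stack's control.

    `template_body` must be the stack template with the LogGroup resource
    included and a DeletionPolicy set on it (required for resource import).
    """
    cfn = boto3.client("cloudformation")
    change_set_name = "import-existing-log-group"
    cfn.create_change_set(
        StackName=stack_name,
        ChangeSetName=change_set_name,
        ChangeSetType="IMPORT",
        TemplateBody=template_body,
        ResourcesToImport=[{
            "ResourceType": "AWS::Logs::LogGroup",
            "LogicalResourceId": logical_id,
            "ResourceIdentifier": {"LogGroupName": log_group_name},
        }],
    )
    cfn.get_waiter("change_set_create_complete").wait(
        StackName=stack_name, ChangeSetName=change_set_name)
    cfn.execute_change_set(StackName=stack_name, ChangeSetName=change_set_name)
```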
When Paco creates a LogGroup, if you delete the environment containing that LogGroup, CloudFormation leaves the LogGroup behind. You then need to manually delete the LogGroup before you can re-create that environment.
If LogGroups had a field similar to the S3 Bucket `deletion_policy: delete`, then Paco could have a hook that deletes the LogGroup and its contents on deletion of the CFN stack.
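A minimal sketch of what such a hook might do, assuming Paco invokes it with the stack name as part of stack deletion (the hook name, wiring, and exact timing relative to the CFN delete are assumptions here):

```python
import boto3
from botocore.exceptions import ClientError

def delete_stack_log_groups(stack_name):
    """Hypothetical hook run as part of deleting the CFN stack.

    Looks up the stack's LogGroup resources and deletes them by name;
    deleting a LogGroup also removes all of its streams and events.
    """
    cfn = boto3.client("cloudformation")
    logs = boto3.client("logs")
    resources = cfn.describe_stack_resources(StackName=stack_name)["StackResources"]
    for resource in resources:
        if resource["ResourceType"] != "AWS::Logs::LogGroup":
            continue
        try:
            logs.delete_log_group(logGroupName=resource["PhysicalResourceId"])
        except ClientError as error:
            # Already gone is fine; re-raise anything else.
            if error.response["Error"]["Code"] != "ResourceNotFoundException":
                raise
```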