gunnarvelle opened this issue 6 years ago
Hi @gunnarvelle! Thanks for this proposal.
I'm not sure I fully understand the context here since I'm not familiar with CodeDeploy. I think you are saying that when the aws_codedeploy_deployment_group is updated it somehow re-registers that lifecycle hook, and so forcing the resource to be updated would effectively re-create that hook if it has been deleted.
Assuming I understood that correctly, a downside of this would be that it would never be possible to run terraform apply without getting a diff, whereas we generally expect a Terraform configuration to stabilize once all of its resources are updated, so that no diff is produced until something actually changes.
An alternative solution to this problem (again, assuming I understood correctly) would be for the "read" function for this resource type, which is used during the "Refreshing" step we run before generating a plan, to check whether the required lifecycle hook is present. If it is not, the read function could leave some record of that in the state so that we can act on it when a plan is subsequently created. This would then make it work as expected without any special configuration.
This alternative approach assumes that there is a way for us to determine unambiguously via the API whether the required lifecycle hook is present. (It sounds like this might be difficult due to the naming scheme used, since we may not be able to predict the generated id. Is that right?)
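One way the refresh-time check could cope with the unpredictable generated id is to match on the fixed prefix of the hook name. The following is only a sketch of that idea, not the provider's actual code: it assumes the aws-sdk-go autoscaling client the AWS provider already holds, and the helper name and prefix constant are placeholders, with the prefix taken from the hook name quoted later in this issue.

```go
package example

import (
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

// Prefix of the hook name quoted later in this issue; the generated id suffix
// is not predictable, so we can only match on the prefix.
const codeDeployLaunchHookPrefix = "CodeDeploy-managed-automatic-launch-deployment-hook-"

// findCodeDeployLaunchHook (a hypothetical helper) returns the name of the
// CodeDeploy-managed launch lifecycle hook on the given autoscaling group,
// or "" if no such hook exists.
func findCodeDeployLaunchHook(conn *autoscaling.AutoScaling, asgName string) (string, error) {
	out, err := conn.DescribeLifecycleHooks(&autoscaling.DescribeLifecycleHooksInput{
		AutoScalingGroupName: aws.String(asgName),
	})
	if err != nil {
		return "", err
	}
	for _, hook := range out.LifecycleHooks {
		name := aws.StringValue(hook.LifecycleHookName)
		if strings.HasPrefix(name, codeDeployLaunchHookPrefix) {
			return name, nil
		}
	}
	return "", nil
}
```

The deployment group's read function could call something like this for each associated autoscaling group during refresh and record an empty result in state, so that a later plan can react to the missing hook.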
The way I would probably approach this, if the above assumptions hold, is:
- Add a new attribute, launch_configuration_lifecycle_hook_name (or similar), which on create contains the generated hook name.
- In CustomizeDiff, check if this attribute has the empty string as its value, and if so update the diff to mark the new value as <computed>, which will then cause Terraform to produce a diff like the following and thus prompt an update:
  launch_configuration_lifecycle_hook_name: "" => <computed>
(It looks like there are actually potentially multiple auto-scaling groups associated, and thus multiple launch configuration hook names, and so in practice this would probably need to be a map from autoscaling group name to lifecycle hook name to fully model the problem.)
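A minimal sketch of that CustomizeDiff check, assuming a single string attribute rather than the per-group map suggested above (the function name is a placeholder, and the exact import path depends on which version of the plugin SDK the provider uses):

```go
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)

// customizeDeploymentGroupDiff is a placeholder name for the CustomizeDiff
// function described above. If the tracked hook name is empty (either because
// the resource is new or because the read function found the hook deleted),
// mark the attribute as <computed> so Terraform plans an update.
func customizeDeploymentGroupDiff(d *schema.ResourceDiff, meta interface{}) error {
	if d.Get("launch_configuration_lifecycle_hook_name").(string) == "" {
		return d.SetNewComputed("launch_configuration_lifecycle_hook_name")
	}
	return nil
}
```

The resource would register this via its CustomizeDiff field, while Create and Read keep launch_configuration_lifecycle_hook_name populated with the generated name, so a non-empty value produces no diff.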
Hello, @apparentlymart. You understood correctly. The lifecycle hook is restored when the deployment group is saved, and with a different id as part of the name.
I can see why you are hesitant to add such a feature, since it would lead to always getting a diff in Terraform. My goal was only to suggest a generic solution to my problem; I assume there are other cases where creating a resource would in turn create other resources. That said, I believe your second solution would be just the thing I need.
Something that might make this difficult is the lifecycle hook itself. Not only is the naming scheme difficult, but the notification_target_arn for the hook points to an Amazon SQS target that depends on the region. For eu-central-1 the value is arn:aws:sqs:eu-central-1:355390497874:razorbill-eu-central-1-prod-default-autoscaling-lifecycle-hook. I have no idea what this might be in other regions.
Some AWS resources, when saved, in turn create new resources outside of the Terraform configuration. If these resources are changed, the changes are not discovered by Terraform. Adding configuration to Terraform to create these other resources up front only leads to duplicate resources, where the Terraform-managed one is unused.
For example, if you create an aws_codedeploy_deployment_group connected to an aws_autoscaling_group, the deployment group adds a lifecycle hook to the autoscaling group. This lifecycle hook is named CodeDeploy-managed-automatic-launch-deployment-hook-application-generated_id and cannot be created using Terraform. If the hook is deleted, the autoscaling group will never be able to launch new instances. Saving the deployment group without changes in the AWS console recreates the hook.
What I would like is a new lifecycle meta-parameter, "always_update", which marks the resource to be updated on every apply, even if there are no other changes in the state.
References
https://github.com/terraform-providers/terraform-provider-aws/issues/2993