cloudposse / terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster
https://cloudposse.com/accelerate
Apache License 2.0

Add rds security group description #114

Closed · nitrocode closed this 3 years ago

nitrocode commented 3 years ago

what

why

references

Gowiem commented 3 years ago

/test all

jurgen-weber-deltatre commented 3 years ago

https://github.com/cloudposse/terraform-aws-elasticache-redis/commit/80943edc3f30bd5074589aba1a65f704edde4de2#commitcomment-48017475

This breaks the module for anyone who has a currently running cluster.

nitrocode commented 3 years ago

Do you pin your module version?

jurgen-weber-deltatre commented 3 years ago

I have now, to get around the problem, but this is a breaking change.

It should default to the current message and allow overriding it, rather than hard-coding the new message.

nitrocode commented 3 years ago

The security group description is hard-coded in a lot of places across most cloudposse modules. This was one of the few places where it wasn't.

I haven't seen this done in any other modules, but would you prefer being able to override the security group description, defaulting to the new text as it is now, with the option to pass a null value to retain the previous unsupplied description?
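For illustration, something like the following, where the variable name and default text are placeholders rather than the module's actual code:

```hcl
# Illustrative sketch only: expose the description as an input whose default is
# the new hard-coded text, so existing users can override it and keep their
# current security group instead of recreating it.
variable "security_group_description" {
  type        = string
  default     = "Security Group for ElastiCache Redis" # placeholder default text
  description = "Description of the security group; override it (for example with \"Managed by Terraform\") to avoid replacing an existing security group"
}
```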

marcuz commented 3 years ago

> The security group description is hard-coded in a lot of places across most cloudposse modules. This was one of the few places where it wasn't.
>
> I haven't seen this done in any other modules, but would you prefer being able to override the security group description, defaulting to the new text as it is now, with the option to pass a null value to retain the previous unsupplied description?

IMHO yes, this change prevents clean module upgrades, unless you "Delete the network interface, or associate with a different security group" and fiddle with TF state after "migrating" yourself.

syphernl commented 3 years ago

We ran into the same issue this morning. I have made a fix that allows configuring the description, so we can re-use the "Managed by Terraform" one for existing clusters.

https://github.com/cloudposse/terraform-aws-elasticache-redis/pull/115

nitrocode commented 3 years ago

Now that @syphernl's PR https://github.com/cloudposse/terraform-aws-elasticache-redis/pull/115 has been merged, @marcuz and @jurgen-weber-deltatre, these are the options.

With either option, please make sure to pin your modules. Using an unpinned module reference is discouraged.
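Roughly, that means either staying pinned to a pre-0.35 release, or upgrading and overriding the description so the security group is not replaced. A minimal sketch of the upgrade path, assuming the input added in #115 is named security_group_description (the module version and the omitted inputs below are placeholders):

```hcl
module "redis" {
  source  = "cloudposse/elasticache-redis/aws"
  version = "0.37.0" # pin an explicit, reviewed release

  # Keep the description the security group already has ("Managed by Terraform")
  # so Terraform does not destroy and recreate it on upgrade.
  security_group_description = "Managed by Terraform"

  # ... remaining required inputs (VPC, subnets, naming, etc.) omitted ...
}
```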

marcuz commented 3 years ago

Thank you @syphernl and @nitrocode! :+1:

jurgen-weber-deltatre commented 3 years ago

Cool, Thank you.

I know pinning is encouraged, but I also disagree with that mentality for many reasons. :) We won't get into that here (happy to discuss on your Slack).

In the end these are just hacks due to limitations of the AWS API, not anyone's fault here. LOL, needing to recreate a security group just because its description changed.

vsimon commented 3 years ago

I just hit this problem when going from version 0.34 to 0.37 of this module. I pin all module versions and review changes. This time I used Renovate, and the merge request did mention the new module variable that works around this, but the "why" describing the breaking change was cut off. The commit message itself was good and predicted what ended up happening; I just hope breaking changes can be more prominently displayed or communicated.

Those who hit the problem in 0.37 will see this:

...
module.elasticache_redis.aws_security_group.default[0]: Still destroying... [id=sg-*6013, 9m50s elapsed]
module.elasticache_redis.aws_security_group.default[0]: Still destroying... [id=sg-*6013, 10m0s elapsed]
Error: Error deleting security group: DependencyViolation: resource sg-*6013 has a dependent object
    status code: 400, request id: *

Reverting/Backing out the change will also result in a timeout while still destroying the resource.

To get around the problem, I went to the AWS ElastiCache console, selected the cluster, clicked Modify, changed "VPC Security Group" to something else (e.g. default), and clicked Modify to save the changes. Then, after running terraform apply again, the module applied cleanly.


I'm not sure, but issues like https://github.com/cloudposse/terraform-aws-elasticache-redis/issues/86 or https://github.com/cloudposse/terraform-aws-rds-cluster/issues/86 allude to something that may work to support running-cluster cases for these kinds of upgrades?