hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

Elasticache security group change forces new resource #4295

Closed: ghost closed this issue 4 years ago

ghost commented 6 years ago

This issue was originally opened by @agentreno as hashicorp/terraform#17907. It was migrated here as a result of the provider split. The original body of the issue is below.


When a new EC2 security group is added to an ElastiCache security group, a new resource is forced. However, destroying the existing ElastiCache security group is not permitted because it is still associated with the cache cluster:

Terraform will perform the following actions:

-/+ aws_elasticache_security_group.test_cache (new resource required)
      id:                              "test_cache" => <computed> (forces new resource)
      description:                     "Managed by Terraform" => "Managed by Terraform"
      name:                            "test_cache" => "test_cache"
      security_group_names.#:          "1" => "2" (forces new resource)
      security_group_names.2701447893: "" => "group two" (forces new resource)
      security_group_names.3399671362: "group one" => "group one"
Error: Error applying plan:

1 error(s) occurred:

* aws_elasticache_security_group.test_cache (destroy): 1 error(s) occurred:

* aws_elasticache_security_group.test_cache: InvalidCacheSecurityGroupState: Cannot delete the security group because at least one cache cluster is still a member: test-cache.
        status code: 400, request id: e184b482-44b5-11e8-8b36-73b27fe25d20

I don't believe a new resource should be forced, since the same change is possible without replacement in the AWS console, and potentially via the API using ModifyCacheCluster (though I'm not entirely sure whether that only covers cache -> SG associations rather than modifying the existing SG). https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html
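For illustration, a rough sketch of that API call using the AWS SDK for Go's ElastiCache client (this is not provider code; the cluster ID and group names are placeholders taken from the config below, and ApplyImmediately is an assumption):

// Sketch: replace the cluster's cache security group associations in place
// via ModifyCacheCluster, instead of destroying and recreating anything.
package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/elasticache"
)

func main() {
    svc := elasticache.New(session.Must(session.NewSession()))

    out, err := svc.ModifyCacheCluster(&elasticache.ModifyCacheClusterInput{
        CacheClusterId: aws.String("test-cache"), // cluster from the config below
        CacheSecurityGroupNames: []*string{
            aws.String("group one"),
            aws.String("group two"),
        },
        ApplyImmediately: aws.Bool(true),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(out.CacheCluster)
}

If a call like this succeeds without replacing anything, it would suggest the association could be updated in place rather than forcing a new resource.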

Something similar was raised in this solved ticket, so it's possibly a regression: https://github.com/hashicorp/terraform/issues/2303

Reproduce using the config below, or by cloning https://github.com/agentreno/terraform-elasticache-modify-issue and applying the config, then uncommenting line 29 and running plan and apply again. Don't forget to destroy :)

resource "aws_security_group" "group_one" {
    name = "group one"
    description = "Testing SG for terraform issue"

    ingress {
        from_port = 0
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

resource "aws_security_group" "group_two" {
    name = "group two"
    description = "Testing SG for terraform issue"

    ingress {
        from_port = 0
        to_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

resource "aws_elasticache_security_group" "test_cache" {
    name =  "test_cache"
    security_group_names = [
        "${aws_security_group.group_one.name}",
        # "${aws_security_group.group_two.name}"
    ]
}

resource "aws_elasticache_cluster" "test_cache" {
    cluster_id = "test-cache"
    engine = "redis"
    node_type = "cache.t1.micro"
    port = 6379
    num_cache_nodes = 1
    parameter_group_name = "default.redis3.2"
    security_group_names = ["${aws_elasticache_security_group.test_cache.name}"]
}
github-actions[bot] commented 4 years ago

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

ghost commented 4 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!