hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

Error with `aws_elasticache_replication_group` resource after removing `aws_elasticache_cluster` resource #25320

Closed fullheart closed 3 months ago

fullheart commented 2 years ago


Terraform CLI and Terraform AWS Provider Version

terraform -v

Terraform v1.1.9
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v3.75.2
+ provider registry.terraform.io/hashicorp/random v3.3.1

Affected Resource(s)

* aws_elasticache_replication_group
* aws_elasticache_cluster

Terraform Configuration Files

Initial State

* variable.tf
```tf
variable "elasticache_subnet_name" { default = "redis-subgroup-sandbox" }
```

* elasticache.tf
```tf
resource "aws_vpc" "foo" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "tf-test"
  }
}

resource "aws_subnet" "foo" {
  vpc_id            = aws_vpc.foo.id
  cidr_block        = "10.0.0.0/24"
  availability_zone = "us-west-2a"

  tags = {
    Name = "tf-test"
  }
}

resource "aws_elasticache_subnet_group" "redis-subgroup" {
  name       = var.elasticache_subnet_name
  subnet_ids = [aws_subnet.foo.id]
}

resource "aws_elasticache_replication_group" "core-tech" {
  replication_group_id          = var.elasticache_name
  replication_group_description = "Redis Cluster for collect Metrics for CloudWatch"

  node_type                  = "cache.t3.micro"
  engine_version             = "5.0.6"
  parameter_group_name       = "default.redis5.0"
  at_rest_encryption_enabled = true # See https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/at-rest-encryption.html
  # port = 6379 # Default port

  snapshot_retention_limit = 1 # Number of days for which ElastiCache will retain automatic cache cluster snapshots before deleting them
  snapshot_window          = "05:30-06:30"

  subnet_group_name = aws_elasticache_subnet_group.redis-subgroup.name

  # Set up two Redis instances so we fail over when the first instance has a problem
  multi_az_enabled           = true # Enable Multi-AZ Support for the replication group
  automatic_failover_enabled = true
  number_cache_clusters      = 2
}

resource "aws_elasticache_cluster" "replica" {
  count = 1

  # cluster_id           = "${aws_elasticache_replication_group.core-tech.replication_group_id}-${count.index}"
  cluster_id           = aws_elasticache_replication_group.core-tech.replication_group_id
  replication_group_id = aws_elasticache_replication_group.core-tech.id
}
```

After removing `aws_elasticache_cluster` resource

* variable.tf
```tf
variable "elasticache_subnet_name" { default = "redis-subgroup-sandbox" }
```

* elasticache.tf
```tf
resource "aws_vpc" "foo" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "tf-test"
  }
}

resource "aws_subnet" "foo" {
  vpc_id            = aws_vpc.foo.id
  cidr_block        = "10.0.0.0/24"
  availability_zone = "us-west-2a"

  tags = {
    Name = "tf-test"
  }
}

resource "aws_elasticache_subnet_group" "redis-subgroup" {
  name       = var.elasticache_subnet_name
  subnet_ids = [aws_subnet.foo.id]
}

resource "aws_elasticache_replication_group" "core-tech" {
  replication_group_id          = var.elasticache_name
  replication_group_description = "Redis Cluster for collect Metrics for CloudWatch"

  node_type                  = "cache.t3.micro"
  engine_version             = "5.0.6"
  parameter_group_name       = "default.redis5.0"
  at_rest_encryption_enabled = true # See https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/at-rest-encryption.html
  # port = 6379 # Default port

  snapshot_retention_limit = 1 # Number of days for which ElastiCache will retain automatic cache cluster snapshots before deleting them
  snapshot_window          = "05:30-06:30"

  subnet_group_name = aws_elasticache_subnet_group.redis-subgroup.name

  # Set up two Redis instances so we fail over when the first instance has a problem
  multi_az_enabled           = true # Enable Multi-AZ Support for the replication group
  automatic_failover_enabled = true
  number_cache_clusters      = 2
}
```

Output from terraform apply

[...]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
  - destroy

Terraform will perform the following actions:

  # aws_elasticache_cluster.replica[0] will be destroyed
  # (because aws_elasticache_cluster.replica is not in configuration)
  - resource "aws_elasticache_cluster" "replica" {
      - arn                      = "arn:aws:elasticache:us-east-1:505049265445:cluster:core-tech-sandbox" -> null
      - availability_zone        = "us-east-1a" -> null
      - az_mode                  = "single-az" -> null
      - cache_nodes              = [
          - {
              - address           = "core-tech-sandbox.xc7syl.0001.use1.cache.amazonaws.com"
              - availability_zone = "us-east-1a"
              - id                = "0001"
              - port              = 6379
            },
        ] -> null
      - cluster_id               = "core-tech-sandbox" -> null
      - engine                   = "redis" -> null
      - engine_version           = "5.0.6" -> null
      - engine_version_actual    = "5.0.6" -> null
      - id                       = "core-tech-sandbox" -> null
      - maintenance_window       = "mon:04:00-mon:05:00" -> null
      - node_type                = "cache.t3.micro" -> null
      - num_cache_nodes          = 1 -> null
      - parameter_group_name     = "default.redis5.0" -> null
      - port                     = 6379 -> null
      - replication_group_id     = "core-tech-sandbox" -> null
      - security_group_ids       = [] -> null
      - security_group_names     = [] -> null
      - snapshot_retention_limit = 0 -> null
      - snapshot_window          = "05:30-06:30" -> null
      - subnet_group_name        = "redis-subgroup-sandbox" -> null
      - tags                     = {} -> null
      - tags_all                 = {} -> null
    }

  # aws_elasticache_replication_group.core-tech will be updated in-place
  ~ resource "aws_elasticache_replication_group" "core-tech" {
        id                            = "core-tech-sandbox"
      ~ member_clusters               = [
          - "core-tech-sandbox",
          - "core-tech-sandbox-001",
          - "core-tech-sandbox-002",
        ] -> (known after apply)
      ~ number_cache_clusters         = 3 -> 2
        tags                          = {}
        # (26 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

Plan: 0 to add, 1 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes
aws_elasticache_cluster.replica[0]: Destroying... [id=core-tech-sandbox]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 10s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 20s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 30s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 40s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 50s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 1m0s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 1m10s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 1m20s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 1m30s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 1m40s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 1m50s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 2m0s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 2m10s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 2m20s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 2m30s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 2m40s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 2m50s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 3m0s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 3m10s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 3m20s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 3m30s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 3m40s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 3m50s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 4m0s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 4m10s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 4m20s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 4m30s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 4m40s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 4m50s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 5m0s elapsed]
aws_elasticache_cluster.replica[0]: Still destroying... [id=core-tech-sandbox, 5m10s elapsed]
aws_elasticache_cluster.replica[0]: Destruction complete after 5m17s
aws_elasticache_replication_group.core-tech: Modifying... [id=core-tech-sandbox]
╷
│ Error: error modifying ElastiCache Replication Group (core-tech-sandbox) clusters: error removing ElastiCache Replication Group (core-tech-sandbox) replicas: NoOperationFault: Requested new replica count already matches current number of replicas.
│       status code: 400, request id: 3bcde18a-02cb-473f-b6ef-5d360d1f67de
│ 
│   with aws_elasticache_replication_group.core-tech,
│   on elasticache.tf line 6, in resource "aws_elasticache_replication_group" "core-tech":
│    6: resource "aws_elasticache_replication_group" "core-tech" {
│ 
╵
Releasing state lock. This may take a few moments...
ERRO[0373] 1 error occurred:
        * exit status 1

Expected Behavior

When removing an aws_elasticache_cluster resource, terraform apply should not throw the Requested new replica count already matches current number of replicas. error, because it is obvious (from my perspective) that the effective number_cache_clusters already decreases by 1 when one aws_elasticache_cluster resource is removed.
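A pattern that avoids the separate destroy step altogether (a sketch, not the provider's documented fix, assuming no per-replica settings are needed) is to manage the replica count through the replication group only, so removing a replica becomes a single in-place update:

```tf
# Sketch: scale replicas via number_cache_clusters on the replication group
# instead of attaching separate aws_elasticache_cluster resources. Removing
# a replica is then one in-place update, with no extra destroy step for the
# provider to reconcile.
resource "aws_elasticache_replication_group" "core-tech" {
  replication_group_id          = var.elasticache_name
  replication_group_description = "Redis Cluster for collect Metrics for CloudWatch"
  node_type                     = "cache.t3.micro"

  multi_az_enabled           = true
  automatic_failover_enabled = true
  number_cache_clusters      = 3 # lower to 2 to remove one replica
}
```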

Actual Behavior

terraform apply fails with the error shown in the log above.

Steps to Reproduce

  1. Create the variable.tf and elasticache.tf files (see Initial State above)
  2. terraform apply
  3. Change elasticache.tf as described in the After removing `aws_elasticache_cluster` resource section above
  4. terraform apply
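An untested workaround sketch: by the time the apply fails, the extra replica has already been destroyed, so refreshing state so the provider sees the current replica count and then re-planning may show no remaining changes:

```shell
# Untested workaround sketch: the destroy step already reduced the replica
# count, so a refresh-only apply reconciles state; the next plan may then
# report nothing to change. Requires Terraform >= 0.15.4 for -refresh-only.
terraform apply -refresh-only
terraform plan
```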
github-actions[bot] commented 4 months ago

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

github-actions[bot] commented 2 months ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.