hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

Elasticache: InvalidParameterCombination: Cannot set snapshotting cluster for cluster mode enabled replication group #6412

Open · opened by ghost 5 years ago

ghost commented 5 years ago

This issue was originally opened by @ngkuznetsov as hashicorp/terraform#19336. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.11.10
+ provider.aws v1.42.0

Terraform Configuration Files

resource "aws_elasticache_replication_group" "ptr_persistence" {
  automatic_failover_enabled    = true
  replication_group_id          = "${var.replication_group_id}"
  replication_group_description = "Redis cluster for PTR"
  node_type                     = "${var.node_type}"
  parameter_group_name          = "${aws_elasticache_parameter_group.ptr_persistence.name}"
  port                          = 6379
  subnet_group_name             = "${aws_elasticache_subnet_group.ptr_persistence.name}"
  security_group_ids            = ["${aws_security_group.ptr_persistence.id}"]
  snapshot_retention_limit      = "35"
  snapshot_window               = "14:19-16:00"
  maintenance_window            = "05:00-06:00"
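  # Note: this cluster_mode block, together with the cluster-enabled
  # parameter in the parameter group below, is what makes the replication
  # group "cluster mode enabled" as named in the error message.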
  cluster_mode {
    num_node_groups             = 1
    replicas_per_node_group     = 2
  }
}

resource "aws_elasticache_parameter_group" "ptr_persistence" {
  name   = "${var.name}"
  family = "redis4.0"
  count  = 1
  parameter {
    name  = "cluster-enabled"
    value = "yes"
  }
}

Expected Behavior

Terraform changes the snapshot configuration of an ElastiCache Redis cluster that was restored from an automatic backup.

Actual Behavior

Running terraform apply against an ElastiCache Redis cluster that was restored from a backup fails with the error:

error updating Elasticache Replication Group (prd273-mik): InvalidParameterCombination: Cannot set snapshotting cluster for cluster mode enabled replication group.

Steps to Reproduce

  1. Provision the Redis cluster with its backup (snapshot) configuration:
$ terraform apply -var-file=terraform.tfvars
  2. Create a backup manually, or wait until a backup is created automatically.
  3. Remove the Redis cluster manually in the AWS console (simulating a cluster failure).
  4. Restore the Redis cluster from the backup manually in the AWS console (steps 2-4 can also be scripted with the AWS CLI; see the sketch below).
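
For reference, steps 2-4 could also be driven from the AWS CLI instead of the console. This is a minimal sketch, not part of the original report: the snapshot name manual-prd273-mik is invented for illustration, and the final create-replication-group call would need additional flags (node type, subnet group, security groups, cluster mode topology) to reproduce the original cluster.

$ aws elasticache create-snapshot \
    --replication-group-id prd273-mik \
    --snapshot-name manual-prd273-mik

$ aws elasticache delete-replication-group \
    --replication-group-id prd273-mik

$ aws elasticache create-replication-group \
    --replication-group-id prd273-mik \
    --replication-group-description "Redis cluster for PTR" \
    --snapshot-name manual-prd273-mik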

Snapshot configuration:

$ aws elasticache describe-snapshots
{
    "Engine": "redis", 
    "CacheParameterGroupName": "prd273-mik", 
    "VpcId": "vpc-0165f1db07d228505", 
    "NodeSnapshots": [
        {
            "SnapshotCreateTime": "2018-11-09T14:19:43Z", 
            "CacheNodeId": "0001", 
            "CacheClusterId": "prd273-mik-0001-003", 
            "NodeGroupId": "0001", 
            "CacheNodeCreateTime": "2018-11-09T12:51:09.161Z", 
            "CacheSize": "5 MB"
        }
    ], 
    "NumNodeGroups": 1, 
    "SnapshotName": "automatic.prd273-mik-2018-11-09-14-19", 
    "ReplicationGroupId": "prd273-mik", 
    "AutoMinorVersionUpgrade": true, 
    "SnapshotRetentionLimit": 35, 
    "AutomaticFailover": "enabled", 
    "SnapshotStatus": "available", 
    "SnapshotSource": "automated", 
    "SnapshotWindow": "14:19-16:00", 
    "EngineVersion": "4.0.10", 
    "CacheSubnetGroupName": "prd273-mik-habitat-ptrpersistence", 
    "ReplicationGroupDescription": "Redis cluster for PTR", 
    "Port": 6379, 
    "PreferredMaintenanceWindow": "sun:05:00-sun:06:00", 
    "CacheNodeType": "cache.m1.small"
}

The Redis cluster is always restored from a backup into a NEW cluster, and the new cluster comes up with the default snapshot configuration:

    "SnapshotRetentionLimit": 0,   <----   `0` - means disabling snapshotting
    "SnapshotWindow": "23:00-00:00", <----   any available window

Restored Redis cluster configuration:

$ aws elasticache describe-cache-clusters
        {
            "Engine": "redis", 
            "AuthTokenEnabled": false, 
            "CacheParameterGroup": {
                "CacheNodeIdsToReboot": [], 
                "CacheParameterGroupName": "prd273-mik", 
                "ParameterApplyStatus": "in-sync"
            }, 
            "SnapshotRetentionLimit": 0, 
            "CacheClusterId": "prd273-mik-0001-001", 
            "CacheSecurityGroups": [], 
            "NumCacheNodes": 1, 
            "AtRestEncryptionEnabled": false, 
            "SnapshotWindow": "23:00-00:00", 
            "CacheClusterCreateTime": "2018-11-09T14:49:23.167Z", 
            "ReplicationGroupId": "prd273-mik", 
            "AutoMinorVersionUpgrade": true, 
            "CacheClusterStatus": "available", 
            "PreferredAvailabilityZone": "eu-west-1a", 
            "ClientDownloadLandingPage": "https://console.aws.amazon.com/elasticache/home#client-download:", 
            "TransitEncryptionEnabled": false, 
            "CacheSubnetGroupName": "prd273-mik-habitat-ptrpersistence", 
            "EngineVersion": "4.0.10", 
            "PendingModifiedValues": {}, 
            "PreferredMaintenanceWindow": "sun:05:00-sun:06:00", 
            "CacheNodeType": "cache.m1.small"
        }, 
  5. Run terraform apply to change the ElastiCache Redis cluster configuration; it fails with the error:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ module.redis.aws_elasticache_replication_group.ptr_persistence
      replication_group_description: " " => "Redis cluster for PTR"
      security_group_ids.#:          "0" => "1"
      security_group_ids.146061859:  "" => "sg-02a25ce45ef1c23be"
      snapshot_retention_limit:      "0" => "35"
      snapshot_window:               "22:30-23:30" => "14:19-16:00"

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.redis.aws_elasticache_replication_group.ptr_persistence: Modifying... (ID: prd273-mik)
  replication_group_description: " " => "Redis cluster for PTR"
  security_group_ids.#:          "0" => "1"
  security_group_ids.146061859:  "" => "sg-02a25ce45ef1c23be"
  snapshot_retention_limit:      "0" => "35"
  snapshot_window:               "22:30-23:30" => "14:19-16:00"

Error: Error applying plan:

1 error(s) occurred:

* module.redis.aws_elasticache_replication_group.ptr_persistence: 1 error(s) occurred:

* aws_elasticache_replication_group.ptr_persistence: error updating Elasticache Replication Group (prd273-mik): InvalidParameterCombination: Cannot set snapshotting cluster for cluster mode enabled replication group.
    status code: 400, request id: e60ed1be-e41f-11e8-9d89-0beb86abdd38

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Workaround

  1. Modify the cluster manually in the AWS console: set snapshot_retention_limit to a value greater than 0 and less than 35, and set snapshot_window to any value, e.g. 20:00-23:00 (an equivalent AWS CLI call is sketched after the output below):
$ aws elasticache describe-cache-clusters
        {
            "Engine": "redis", 
            "AuthTokenEnabled": false, 
            "CacheParameterGroup": {
                "CacheNodeIdsToReboot": [], 
                "CacheParameterGroupName": "prd273-mik", 
                "ParameterApplyStatus": "in-sync"
            }, 
            "SnapshotRetentionLimit": 0, 
            "CacheClusterId": "prd273-mik-0001-001", 
            "CacheSecurityGroups": [], 
            "NumCacheNodes": 1, 
            "AtRestEncryptionEnabled": false, 
            "SnapshotWindow": "20:00-23:00", 
            "CacheClusterCreateTime": "2018-11-09T14:49:23.167Z", 
            "ReplicationGroupId": "prd273-mik", 
            "AutoMinorVersionUpgrade": true, 
            "CacheClusterStatus": "available", 
            "PreferredAvailabilityZone": "eu-west-1a", 
            "ClientDownloadLandingPage": "https://console.aws.amazon.com/elasticache/home#client-download:", 
            "TransitEncryptionEnabled": false, 
            "CacheSubnetGroupName": "prd273-mik-habitat-ptrpersistence", 
            "EngineVersion": "4.0.10", 
            "PendingModifiedValues": {}, 
            "PreferredMaintenanceWindow": "sun:05:00-sun:06:00", 
            "CacheNodeType": "cache.m1.small"
        }, 
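The same modification can be made from the AWS CLI instead of the console. A minimal sketch, assuming the console change maps onto modify-replication-group; apply-immediately makes the new values take effect right away rather than at the next maintenance window:

$ aws elasticache modify-replication-group \
    --replication-group-id prd273-mik \
    --snapshot-retention-limit 1 \
    --snapshot-window "20:00-23:00" \
    --apply-immediately
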
  2. Run terraform apply again:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ module.redis.aws_elasticache_replication_group.ptr_persistence
      replication_group_description: " " => "Redis cluster for PTR"
      security_group_ids.#:          "0" => "1"
      security_group_ids.146061859:  "" => "sg-02a25ce45ef1c23be"
      snapshot_retention_limit:      "1" => "35"
      snapshot_window:               "20:00-23:00" => "02:00-05:00"

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes 

module.redis.aws_elasticache_replication_group.ptr_persistence: Modifying... (ID: prd273-mik)
  replication_group_description: " " => "Redis cluster for PTR"
  security_group_ids.#:          "0" => "1"
  security_group_ids.146061859:  "" => "sg-02a25ce45ef1c23be"
  snapshot_retention_limit:      "1" => "35"
  snapshot_window:               "20:00-23:00" => "02:00-05:00"
module.redis.aws_elasticache_replication_group.ptr_persistence: Still modifying... (ID: prd273-mik, 10s elapsed)
module.redis.aws_elasticache_replication_group.ptr_persistence: Still modifying... (ID: prd273-mik, 20s elapsed)
module.redis.aws_elasticache_replication_group.ptr_persistence: Still modifying... (ID: prd273-mik, 30s elapsed)
module.redis.aws_elasticache_replication_group.ptr_persistence: Modifications complete after 34s (ID: prd273-mik)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

jsanant commented 4 years ago

@aeschright / @bflad - Is this issue resolved in the provider?

ervinb commented 4 years ago

@jsanant For what it's worth, I just ran into this with the latest provider and the proposed workaround is still working.

Zogoo commented 3 years ago

Any update, or even any response from the team? This is happening again with Terraform 0.12.26 and AWS provider 3.7.

rustlingwind commented 3 years ago

Also hit this issue, with Terraform 0.14.3 and AWS provider 3.37.0.

amontalban commented 3 years ago

Just faced this with Terraform 1.0.4 and AWS provider 3.52.0.

davidhiebert commented 2 years ago

Running into the same scenario with Terraform 1.1.6 and AWS provider 4.2.0.

euthuppan commented 2 years ago

This issue still exists for us.

spirosekulovski commented 2 years ago

+1 Just ran into this on Terraform 1.1.7 and AWS provider 4.20.1. It has been almost 4 years since this issue was first reported.

deepuashokan85 commented 1 year ago

+1 I too ran into this issue on Terraform v1.3.7.

maazmalik1 commented 1 year ago

This issue still exists on Terraform 1.3.7 and AWS provider v4.50.0.

rooty0 commented 1 year ago

Unbelievable, this is from 2019 and still not fixed.

mmoreno43 commented 8 months ago

Still facing this issue while running Terraform version 1.5.2.

mijatovicdev commented 7 months ago

This is still an issue 6 years later with Terraform 1.6.6.