hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

Elasticache: InvalidParameterCombination: Cannot set snapshotting cluster for cluster mode enabled replication group #19336

Closed ngkuznetsov closed 5 years ago

ngkuznetsov commented 5 years ago

Terraform Version

Terraform v0.11.10
+ provider.aws v1.42.0

Terraform Configuration Files

resource "aws_elasticache_replication_group" "ptr_persistence" {
  automatic_failover_enabled    = true
  replication_group_id          = "${var.replication_group_id}"
  replication_group_description = "Redis cluster for PTR"
  node_type                     = "${var.node_type}"
  parameter_group_name          = "${aws_elasticache_parameter_group.ptr_persistence.name}"
  port                          = 6379
  subnet_group_name             = "${aws_elasticache_subnet_group.ptr_persistence.name}"
  security_group_ids            = ["${aws_security_group.ptr_persistence.id}"]
  snapshot_retention_limit      = "35"
  snapshot_window               = "14:19-16:00"
  maintenance_window            = "05:00-06:00"
  cluster_mode {
    num_node_groups             = 1
    replicas_per_node_group     = 2
  }
}

resource "aws_elasticache_parameter_group" "ptr_persistence" {
  name   = "${var.name}"
  family = "redis4.0"
  count = 1
  parameter {
    name  = "cluster-enabled"
    value = "yes"
  }
}

Expected Behavior

Terraform updates the snapshot configuration of an ElastiCache Redis cluster that has been restored from an automatic backup.

Actual Behavior

Running `terraform apply` against an ElastiCache Redis cluster restored from a backup fails with the error:

error updating Elasticache Replication Group (prd273-mik): InvalidParameterCombination: Cannot set snapshotting cluster for cluster mode enabled replication group.
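The wording of the error suggests the provider is sending the `SnapshottingClusterId` parameter in its `ModifyReplicationGroup` call, which the ElastiCache API rejects for cluster-mode-enabled groups. This is an assumption about the provider's behavior, but the same rejection can be reproduced directly with the AWS CLI (the group and cluster ids below are taken from the output later in this report):

```shell
# Attempt to set a snapshotting cluster on a cluster-mode-enabled
# replication group; the API only allows this for cluster-mode-disabled groups.
aws elasticache modify-replication-group \
  --replication-group-id prd273-mik \
  --snapshotting-cluster-id prd273-mik-0001-001 \
  --apply-immediately
# Fails with InvalidParameterCombination: Cannot set snapshotting cluster
# for cluster mode enabled replication group (the error from this report).
```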

Steps to Reproduce

  1. Provision a Redis cluster with a backup (snapshot) configuration:
terraform apply -var-file=terraform.tfvars
  2. Create a backup manually, or wait until a backup is created automatically.
  3. Remove the Redis cluster manually in the AWS console (simulating a cluster failure).
  4. Restore the Redis cluster from the backup manually in the AWS console.

Snapshot configuration:

$ aws elasticache describe-snapshots
{
    "Engine": "redis", 
    "CacheParameterGroupName": "prd273-mik", 
    "VpcId": "vpc-0165f1db07d228505", 
    "NodeSnapshots": [
        {
            "SnapshotCreateTime": "2018-11-09T14:19:43Z", 
            "CacheNodeId": "0001", 
            "CacheClusterId": "prd273-mik-0001-003", 
            "NodeGroupId": "0001", 
            "CacheNodeCreateTime": "2018-11-09T12:51:09.161Z", 
            "CacheSize": "5 MB"
        }
    ], 
    "NumNodeGroups": 1, 
    "SnapshotName": "automatic.prd273-mik-2018-11-09-14-19", 
    "ReplicationGroupId": "prd273-mik", 
    "AutoMinorVersionUpgrade": true, 
    "SnapshotRetentionLimit": 35, 
    "AutomaticFailover": "enabled", 
    "SnapshotStatus": "available", 
    "SnapshotSource": "automated", 
    "SnapshotWindow": "14:19-16:00", 
    "EngineVersion": "4.0.10", 
    "CacheSubnetGroupName": "prd273-mik-habitat-ptrpersistence", 
    "ReplicationGroupDescription": "Redis cluster for PTR", 
    "Port": 6379, 
    "PreferredMaintenanceWindow": "sun:05:00-sun:06:00", 
    "CacheNodeType": "cache.m1.small"
}

A Redis cluster is always restored from a backup into a NEW cluster. The new cluster comes up with the default snapshot configuration:

    "SnapshotRetentionLimit": 0,     <----  `0` means snapshotting is disabled
    "SnapshotWindow": "23:00-00:00", <----  any available window
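A quick way to confirm that the restored group has lost its snapshot settings is to filter the `describe-cache-clusters` output with a JMESPath `--query` (a sketch; the group id is from this report):

```shell
# Show only the snapshot-related attributes of the restored group's members.
aws elasticache describe-cache-clusters \
  --query "CacheClusters[?ReplicationGroupId=='prd273-mik'].{Id:CacheClusterId,Retention:SnapshotRetentionLimit,Window:SnapshotWindow}"
```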

Restored Redis cluster configuration:

$ aws elasticache describe-cache-clusters
        {
            "Engine": "redis", 
            "AuthTokenEnabled": false, 
            "CacheParameterGroup": {
                "CacheNodeIdsToReboot": [], 
                "CacheParameterGroupName": "prd273-mik", 
                "ParameterApplyStatus": "in-sync"
            }, 
            "SnapshotRetentionLimit": 0, 
            "CacheClusterId": "prd273-mik-0001-001", 
            "CacheSecurityGroups": [], 
            "NumCacheNodes": 1, 
            "AtRestEncryptionEnabled": false, 
            "SnapshotWindow": "23:00-00:00", 
            "CacheClusterCreateTime": "2018-11-09T14:49:23.167Z", 
            "ReplicationGroupId": "prd273-mik", 
            "AutoMinorVersionUpgrade": true, 
            "CacheClusterStatus": "available", 
            "PreferredAvailabilityZone": "eu-west-1a", 
            "ClientDownloadLandingPage": "https://console.aws.amazon.com/elasticache/home#client-download:", 
            "TransitEncryptionEnabled": false, 
            "CacheSubnetGroupName": "prd273-mik-habitat-ptrpersistence", 
            "EngineVersion": "4.0.10", 
            "PendingModifiedValues": {}, 
            "PreferredMaintenanceWindow": "sun:05:00-sun:06:00", 
            "CacheNodeType": "cache.m1.small"
        }, 
  5. Run `terraform apply` to change the ElastiCache Redis cluster configuration and get the error:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ module.redis.aws_elasticache_replication_group.ptr_persistence
      replication_group_description: " " => "Redis cluster for PTR"
      security_group_ids.#:          "0" => "1"
      security_group_ids.146061859:  "" => "sg-02a25ce45ef1c23be"
      snapshot_retention_limit:      "0" => "35"
      snapshot_window:               "22:30-23:30" => "14:19-16:00"

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.redis.aws_elasticache_replication_group.ptr_persistence: Modifying... (ID: prd273-mik)
  replication_group_description: " " => "Redis cluster for PTR"
  security_group_ids.#:          "0" => "1"
  security_group_ids.146061859:  "" => "sg-02a25ce45ef1c23be"
  snapshot_retention_limit:      "0" => "35"
  snapshot_window:               "22:30-23:30" => "14:19-16:00"

Error: Error applying plan:

1 error(s) occurred:

* module.redis.aws_elasticache_replication_group.ptr_persistence: 1 error(s) occurred:

* aws_elasticache_replication_group.ptr_persistence: error updating Elasticache Replication Group (prd273-mik): InvalidParameterCombination: Cannot set snapshotting cluster for cluster mode enabled replication group.
    status code: 400, request id: e60ed1be-e41f-11e8-9d89-0beb86abdd38

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Workaround

  1. Modify the cluster manually in the AWS console: set `snapshot_retention_limit` to a value greater than 0 (e.g. 1) and set `snapshot_window` to any value, e.g. `20:00-23:00`:
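The manual console change can equally be done with the AWS CLI; a sketch using `modify-replication-group`:

```shell
# Re-enable snapshotting on the restored group so the subsequent
# `terraform apply` only has to adjust the values, not turn the feature on.
aws elasticache modify-replication-group \
  --replication-group-id prd273-mik \
  --snapshot-retention-limit 1 \
  --snapshot-window 20:00-23:00 \
  --apply-immediately
```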
$ aws elasticache describe-cache-clusters
        {
            "Engine": "redis", 
            "AuthTokenEnabled": false, 
            "CacheParameterGroup": {
                "CacheNodeIdsToReboot": [], 
                "CacheParameterGroupName": "prd273-mik", 
                "ParameterApplyStatus": "in-sync"
            }, 
            "SnapshotRetentionLimit": 0, 
            "CacheClusterId": "prd273-mik-0001-001", 
            "CacheSecurityGroups": [], 
            "NumCacheNodes": 1, 
            "AtRestEncryptionEnabled": false, 
            "SnapshotWindow": "20:00-23:00", 
            "CacheClusterCreateTime": "2018-11-09T14:49:23.167Z", 
            "ReplicationGroupId": "prd273-mik", 
            "AutoMinorVersionUpgrade": true, 
            "CacheClusterStatus": "available", 
            "PreferredAvailabilityZone": "eu-west-1a", 
            "ClientDownloadLandingPage": "https://console.aws.amazon.com/elasticache/home#client-download:", 
            "TransitEncryptionEnabled": false, 
            "CacheSubnetGroupName": "prd273-mik-habitat-ptrpersistence", 
            "EngineVersion": "4.0.10", 
            "PendingModifiedValues": {}, 
            "PreferredMaintenanceWindow": "sun:05:00-sun:06:00", 
            "CacheNodeType": "cache.m1.small"
        }, 
  2. Run `terraform apply` again:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ module.redis.aws_elasticache_replication_group.ptr_persistence
      replication_group_description: " " => "Redis cluster for PTR"
      security_group_ids.#:          "0" => "1"
      security_group_ids.146061859:  "" => "sg-02a25ce45ef1c23be"
      snapshot_retention_limit:      "1" => "35"
      snapshot_window:               "20:00-23:00" => "02:00-05:00"

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes 

module.redis.aws_elasticache_replication_group.ptr_persistence: Modifying... (ID: prd273-mik)
  replication_group_description: " " => "Redis cluster for PTR"
  security_group_ids.#:          "0" => "1"
  security_group_ids.146061859:  "" => "sg-02a25ce45ef1c23be"
  snapshot_retention_limit:      "1" => "35"
  snapshot_window:               "20:00-23:00" => "02:00-05:00"
module.redis.aws_elasticache_replication_group.ptr_persistence: Still modifying... (ID: prd273-mik, 10s elapsed)
module.redis.aws_elasticache_replication_group.ptr_persistence: Still modifying... (ID: prd273-mik, 20s elapsed)
module.redis.aws_elasticache_replication_group.ptr_persistence: Still modifying... (ID: prd273-mik, 30s elapsed)
module.redis.aws_elasticache_replication_group.ptr_persistence: Modifications complete after 34s (ID: prd273-mik)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
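An alternative to the manual step, assuming the snapshot settings can be managed outside Terraform after a restore, is to ignore them on the resource (Terraform 0.11 syntax; note this avoids the failing update but also means Terraform no longer reconciles those attributes):

```
resource "aws_elasticache_replication_group" "ptr_persistence" {
  # ... existing arguments as above ...

  lifecycle {
    ignore_changes = ["snapshot_retention_limit", "snapshot_window"]
  }
}
```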
ghost commented 5 years ago

This issue has been automatically migrated to terraform-providers/terraform-provider-aws#6412 because it looks like an issue with that provider. If you believe this is not an issue with the provider, please reply to terraform-providers/terraform-provider-aws#6412.

ghost commented 4 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.