Open lancecwhite opened 1 week ago
Hey @lancecwhite 👋 Thank you for taking the time to raise this! Are you able to supply a sample configuration that can be used to reproduce this, and/or debug logs (redacted as needed)? That sort of information is necessary for whoever ultimately picks this up to work on it.

One immediate question I have is what value you're setting for the `apply_immediately` argument. There's a note near the top of the resource documentation that mentions:

> When you change an attribute, such as `num_cache_nodes`, by default it is applied in the next maintenance window. Because of this, Terraform may report a difference in its planning phase because the actual modification has not yet taken place. You can use the `apply_immediately` flag to instruct the service to apply the change immediately.
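For reference, a minimal sketch of setting that flag directly on the resource (the resource name and sizing values here are illustrative, not taken from the reporter's configuration):

```hcl
resource "aws_elasticache_cluster" "example" {
  cluster_id      = "example-memcached"
  engine          = "memcached"
  node_type       = "cache.t3.micro"
  num_cache_nodes = 2

  # Apply topology changes (e.g. to num_cache_nodes) right away
  # rather than deferring them to the next maintenance window.
  apply_immediately = true
}
```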
Hey @justinretzolk, thanks for the reply. Currently, I am defaulting `apply_immediately` to `true` for the module.

Here is a sample of our module invocation:
```hcl
module "memcached" {
  source                      = "./modules/aws/elasticache"
  apply_immediately           = var.apply_immediately
  az_mode                     = var.az_mode
  cluster_id                  = var.cluster_id
  create                      = var.create
  create_cluster              = var.create_cluster
  create_parameter_group      = var.create_parameter_group
  create_replication_group    = false # ensures a Memcached deployment rather than Redis
  create_subnet_group         = var.create_subnet_group
  description                 = var.description
  engine                      = "memcached"
  engine_version              = var.engine_version
  maintenance_window          = var.maintenance_window
  network_type                = var.network_type
  node_type                   = var.node_type
  num_cache_nodes             = var.num_cache_nodes
  parameter_group_description = var.parameter_group_description
  parameter_group_family      = var.parameter_group_family
  parameter_group_name        = var.parameter_group_name
  parameters                  = var.memcached_parameters
  port                        = var.port
  security_group_ids          = var.security_group_ids
  subnet_group_description    = var.subnet_group_description
  subnet_group_name           = var.subnet_group_name
  subnet_ids                  = var.subnet_ids
  tags                        = var.tags
  transit_encryption_enabled  = var.transit_encryption_enabled
  vpc_id                      = var.vpc_id
}
```
Hey @lancecwhite,

Are you able to supply the configuration of the `aws_elasticache_cluster` resource within the module and where `var.apply_immediately` is being set? It might also be helpful to know how the `cluster_cache_nodes` output is defined in the configuration (I'm assuming it's just referencing the related attribute of `aws_elasticache_cluster`, but being explicit is helpful for whoever winds up picking this up).
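(For illustration, the assumption above would correspond to an output shaped roughly like the sketch below; the `this[0]` resource address is hypothetical and depends on how the module names and counts the resource:)

```hcl
output "cluster_cache_nodes" {
  description = "Node objects (address, availability_zone, id, outpost_arn, port) exported by the cluster"
  value       = try(aws_elasticache_cluster.this[0].cache_nodes, [])
}
```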
@justinretzolk
Here is the content of my `local.auto.tfvars` file used during the `terraform apply`:

```hcl
additional_ingress_rules = [{"description":"Ingress - xxxx - Memcached","from_port":11211,"to_port":11211,"ip_protocol":"tcp","cidr_ipv4":"x.x.x.x/20"}]
apply_immediately = true
aws_profile = "null-memcached-manager"
aws_region = "us-east-1"
cluster_id = "null-test"
default_tags = {"JobName":"Memcached-null-test","SystemName":"null","TFModule":"Memcached","TFStatePath":"s3://null-temp-s3-bucket-tf-backend/null-Memcached-null.tfstate"}
description = "null-test cluster"
engine_version = "1.6.17"
maintenance_window = "mon:09:30-mon:10:30"
memcached_parameters = []
node_type = "cache.r7g.large"
num_cache_nodes = 2
parameter_group_family = "memcached1.6"
parameter_group_name = "null-test-parameter-group"
subnet_group_description = "Subnet group for Memcached - null-test"
subnet_group_name = "null-test-subnet-group"
subnet_ids = ["subnet-xxxxxx","subnet-xxxxxxx"]
tags = {"ud:rsc:product-name":"uem memcached","ud:rsc:environment":"dev","ud:uem:datacenter":"null"}
transit_encryption_enabled = false
vpc_id = "vpc-xxxxxxx"
```

Could you provide a bit more detail on what's needed for `cluster_cache_nodes`? The output seems to trail the actual deployment of the resource. In the example I provided, `num_cache_nodes` is being changed from 2 to 4, so I would expect to see the below:
```yaml
cluster_cache_nodes:
  - address: "test-memcache.xxxxx.0001.use1.cache.amazonaws.com"
    availability_zone: "us-east-1a"
    id: "0001"
    outpost_arn: ""
    port: 11211
  - address: "test-memcache.xxxxx.0002.use1.cache.amazonaws.com"
    availability_zone: "us-east-1b"
    id: "0002"
    outpost_arn: ""
    port: 11211
  - address: "test-memcache.xxxxx.0003.use1.cache.amazonaws.com"
    availability_zone: "us-east-1c"
    id: "0003"
    outpost_arn: ""
    port: 11211
  - address: "test-memcache.xxxxx.0004.use1.cache.amazonaws.com"
    availability_zone: "us-east-1a"
    id: "0004"
    outpost_arn: ""
    port: 11211
```
But even with `apply_immediately` set, this is what I see for the output:

```yaml
cluster_cache_nodes:
  - address: "test-memcache.xxxxx.0001.use1.cache.amazonaws.com"
    availability_zone: "us-east-1a"
    id: "0001"
    outpost_arn: ""
    port: 11211
  - address: "test-memcache.xxxxx.0002.use1.cache.amazonaws.com"
    availability_zone: "us-east-1b"
    id: "0002"
    outpost_arn: ""
    port: 11211
```
The output is not updated until the NEXT modification is run against the cluster, at which point it finally includes the two nodes that were added during the previous apply.
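As a possible interim workaround (not from the thread, and untested against this setup), a refresh-only apply after the scaling operation should re-read the cluster from the API and bring the new nodes into the `cluster_cache_nodes` output without waiting for a subsequent modification:

```
# Re-read remote objects into state without proposing other changes,
# then inspect the refreshed output.
terraform apply -refresh-only -auto-approve
terraform output cluster_cache_nodes
```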
Terraform Core Version
1.8.5
AWS Provider Version
5.65.0
Affected Resource(s)
When increasing the node count (`num_cache_nodes`) for `aws_elasticache_cluster`, the `cluster_cache_nodes` output appears to lag, only updating during the next execution of `terraform apply`. It appears that the resource isn't making a request against the managed service to refresh the list of nodes after the modification completes.
Expected Behavior
Expected output for `cluster_cache_nodes`:
```hcl
cluster_cache_nodes = tolist([
  {
    "address"           = "test-memcache.xxxxx.0001.use1.cache.amazonaws.com"
    "availability_zone" = "us-east-1a"
    "id"                = "0001"
    "outpost_arn"       = ""
    "port"              = 11211
  },
  {
    "address"           = "test-memcache.xxxxx.0002.use1.cache.amazonaws.com"
    "availability_zone" = "us-east-1b"
    "id"                = "0002"
    "outpost_arn"       = ""
    "port"              = 11211
  },
  {
    "address"           = "test-memcache.xxxxx.0003.use1.cache.amazonaws.com"
    "availability_zone" = "us-east-1c"
    "id"                = "0003"
    "outpost_arn"       = ""
    "port"              = 11211
  },
  {
    "address"           = "test-memcache.xxxxx.0004.use1.cache.amazonaws.com"
    "availability_zone" = "us-east-1a"
    "id"                = "0004"
    "outpost_arn"       = ""
    "port"              = 11211
  },
])
```
Actual Behavior
As an example, if you increase `num_cache_nodes` from 2 to 4, the output after the apply finishes remains:
```hcl
cluster_cache_nodes = tolist([
  {
    "address"           = "test-memcache.xxxxx.0001.use1.cache.amazonaws.com"
    "availability_zone" = "us-east-1a"
    "id"                = "0001"
    "outpost_arn"       = ""
    "port"              = 11211
  },
  {
    "address"           = "test-memcache.xxxxx.0002.use1.cache.amazonaws.com"
    "availability_zone" = "us-east-1b"
    "id"                = "0002"
    "outpost_arn"       = ""
    "port"              = 11211
  },
])
```
Relevant Error/Panic Output Snippet
No response
Terraform Configuration Files
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_cluster
Steps to Reproduce
terraform apply
Debug Output
No response
Panic Output
No response
Important Factoids
No response
References
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_cluster
Would you like to implement a fix?
None