AWS confirmed in a support chat that the major version upgrade occurs asynchronously.
I have updated the issue with a sample configuration. I am working on getting a debug output, but because the issue is intermittent, this is taking more time.
I was able to reproduce this issue and have attached a debug log output.
One thing I have noticed, anecdotally, is that this issue appears to be more readily reproducible when the database cluster requires an intermediate upgrade before it can reach the target version. For example, Aurora PostgreSQL 10.20 cannot be upgraded directly to 14.4; to complete this upgrade, you must first upgrade to 10.21, 11.16, or 13.6. Since 10.21 is a minor upgrade, that is the route I take.
So to replicate this scenario, I created an Aurora Postgres 10.20 cluster, then upgraded it to 10.21, then upgraded it to 14.4.
In the attached log file, skip to line 2803, where the AWS provider begins polling for the status of the database cluster to become "available". You can see in the response that the database cluster is already "available" because the upgrade has not yet started on the backend, and so the provider assumes it is "ready" and moves on to modify the database instances, which is where the failure occurs.
I attempted to work around this issue by adding a null_resource between the aws_rds_cluster and the aws_rds_cluster_instance resources that would use a local-exec provisioner to query the RDS API and determine the true status of the cluster during an upgrade:
resource "aws_rds_cluster" "main" {
allow_major_version_upgrade = true
apply_immediately = true
backup_retention_period = 1
cluster_identifier = "${var.name}-cluster"
copy_tags_to_snapshot = true
database_name = random_pet.database.id
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.main.name
db_instance_parameter_group_name = aws_db_parameter_group.main.name
db_subnet_group_name = aws_db_subnet_group.main.name
deletion_protection = false
engine = local.engine
engine_version = local.engine_version
master_password = random_password.password.result
master_username = random_pet.username.id
skip_final_snapshot = true
tags = var.tags
vpc_security_group_ids = [aws_security_group.main.id]
}
# https://github.com/hashicorp/terraform-provider-aws/issues/28339
resource "null_resource" "aws_provider_bug" {
triggers = {
db_parameter_group_name = aws_db_parameter_group.main.name
engine_version = aws_rds_cluster.main.engine_version
}
provisioner "local-exec" {
command = "${path.module}/wait-for-db-cluster.sh"
environment = {
CLUSTER_IDENTIFIER = aws_rds_cluster.main.cluster_identifier
}
}
}
resource "aws_rds_cluster_instance" "cluster_instances" {
count = 2
depends_on = [null_resource.aws_provider_bug]
apply_immediately = true
auto_minor_version_upgrade = false
cluster_identifier = aws_rds_cluster.main.id
db_parameter_group_name = aws_db_parameter_group.main.name
db_subnet_group_name = aws_db_subnet_group.main.name
engine = aws_rds_cluster.main.engine
engine_version = aws_rds_cluster.main.engine_version
identifier = "${var.name}-${count.index}"
instance_class = "db.t3.medium"
publicly_accessible = false
tags = var.tags
lifecycle {
ignore_changes = [engine_version]
}
}
#!/usr/bin/env bash
set -eo pipefail
function isDbClusterAvailableWithNoPendingModifiedValues() {
local dbCluster
dbCluster="$(
aws rds describe-db-clusters \
--db-cluster-identifier "$CLUSTER_IDENTIFIER"
)"
jq -e '.DBClusters[0] | .Status == "available" and .PendingModifiedValues == null' > /dev/null <<< "$dbCluster"
}
printf 'Waiting for database cluster status to be "available" with no pending modified values\n'
while true; do
isDbClusterAvailableWithNoPendingModifiedValues && break
printf 'Database cluster is not ready yet\n'
sleep 10
done
But this too failed occasionally with the following error:
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for null_resource.aws_provider_bug to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/null" produced an invalid new value
│ for .triggers["engine_version"]: was cty.StringVal("14.4"), but now cty.StringVal("10.21").
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
Attached is a log from the operation. terraform-apply.1671226841.log
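For what it's worth, the inconsistent final plan seems to come from the trigger referencing the cluster's computed engine_version attribute, whose value can change between plan and apply while the asynchronous upgrade is still in flight. A variation I have not fully verified would be to key the trigger off the input variable instead (here the postgres_version variable from the reproduction steps; adjust the name to your own configuration), so the trigger value is known at plan time:
resource "null_resource" "aws_provider_bug" {
  triggers = {
    db_parameter_group_name = aws_db_parameter_group.main.name
    # Reference the planned input value instead of the cluster's computed
    # engine_version attribute, so the trigger does not change during apply.
    engine_version = var.postgres_version
  }

  provisioner "local-exec" {
    command = "${path.module}/wait-for-db-cluster.sh"

    environment = {
      CLUSTER_IDENTIFIER = aws_rds_cluster.main.cluster_identifier
    }
  }
}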
Faced the same today using provider hashicorp/aws v5.21.0 and Terraform v1.6.1. Couldn't start a major version upgrade on an Aurora PostgreSQL RDS cluster (11.18 -> 14.6).
│ Error: updating RDS Cluster Instance (xxx-db-instance-0): InvalidParameterCombination: The parameter group default.aurora-postgresql14 with DBParameterGroupFamily aurora-postgresql14 can't be used for this instance. Use a parameter group with DBParameterGroupFamily aurora-postgresql11.
@ewbankkit Has there been any movement here? Any known workarounds?
Same here with v5.11.0. Related: https://github.com/hashicorp/terraform-provider-aws/pull/30247
> @ewbankkit Has there been any movement here? Any known workarounds?

In my case, I had to reboot the cluster instances, then run apply again, and the upgrade succeeded.
@niels1voo Interesting. When exactly did you try rebooting? Doesn't seem to have any effect for me.
I'm seeing the same issue as @nfantone , using v5.30.0 on Terraform 1.5.7.
When attempting a major version upgrade from engine version 14.3 to 15.3, I see this error:
Error: updating RDS Cluster Instance (tf-20231213171635810400000006): InvalidParameterCombination: The parameter group test-aurora-serverless-db1-7378-pg15 with DBParameterGroupFamily aurora-postgresql15 can't be used for this instance. Use a parameter group with DBParameterGroupFamily aurora-postgresql14.
I am seeing this error consistently in my tests. I didn't seem to have this issue on version 4.67.0 of the provider (I'll double-check that).
(Edit: Actually, sorry, I think the problem was that my RDS DB cluster had failed to upgrade to 15.3 and stayed at version 14.3 because of insufficient memory, so my failure was caused by insufficient memory rather than this bug. Please disregard.)
Possibly related:
Ran into this issue with Terraform v1.5.1 and AWS provider v5.19.0. This bug makes it impossible to upgrade your cluster with Terraform!
Warning: This issue has been closed, meaning that any additional comments are hard for our team to see. Please assume that the maintainers will not see them.
Ongoing conversations amongst community members are welcome; however, the issue will be locked after 30 days. Moving conversations to another venue, such as the AWS Provider forum, is recommended. If you have additional concerns, please open a new issue, referencing this one where needed.
This functionality has been released in v5.60.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
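For anyone picking this up later, a minimal provider constraint that pulls in a release containing the fix might look like the following (standard required_providers syntax; pin to whatever newer version suits your module):
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # v5.60.0 is the release noted above as containing the fix.
      version = ">= 5.60.0"
    }
  }
}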
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform Core Version
1.3.6
AWS Provider Version
4.42.0
Affected Resource(s)
aws_rds_cluster
aws_rds_cluster_instance
Expected Behavior
Terraform should wait for pending changes to be applied asynchronously
Actual Behavior
Terraform is only monitoring the Status attribute returned by the DescribeDBClusters operation. AWS applies some changes asynchronously, for example changing the database engine major version. When Terraform applies such a change and then begins polling for the cluster status to become "available", it finds that the status is immediately "available" because AWS has not applied the changes yet.
While the cluster is still in the "available" state, the pending changes can be found in another attribute, PendingModifiedValues. Terraform should wait until both Status=available and PendingModifiedValues=[].
Relevant Error/Panic Output Snippet
No response
Terraform Configuration Files
Steps to Reproduce
Create an Aurora PostgreSQL cluster on an older major version using the sample configuration, then update the postgres_version variable to a newer major version, such as 14.4, and run terraform apply.
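The sample configuration is shown in the comments above; as a rough sketch of the shape assumed by these steps (variable and local names are inferred from the snippets in this thread, not copied from the original files), the postgres_version variable feeds the cluster's engine_version:
variable "postgres_version" {
  description = "Target Aurora PostgreSQL engine version"
  type        = string
  default     = "10.21"
}

locals {
  # These are what the aws_rds_cluster resource references via
  # local.engine and local.engine_version.
  engine         = "aurora-postgresql"
  engine_version = var.postgres_version
}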
Debug Output
See attached terraform-apply.1671040201.log
Panic Output
No response
Important Factoids
The function waitDBClusterUpdated may want to consider checking for more than just the cluster Status.
Immediately after initiating a major version upgrade via a call to ModifyDBCluster, the cluster continues to return a status of "available", but also indicates pending modified values.
It would appear that the call to ModifyDBCluster returns immediately, but the changes are applied by AWS asynchronously, which is misleading the AWS provider into thinking modifications are complete when they are not.
I have seen this behavior intermittently; sometimes the major version upgrade succeeds using Terraform, and sometimes I run into an error.
Because Terraform incorrectly thinks that modifications on the cluster have completed, it proceeds to ModifyDBInstance to set a new parameter group, and that call fails with the InvalidParameterCombination error about the DBParameterGroupFamily quoted in the comments above.
References
No response
Would you like to implement a fix?
None