hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

[Bug]: Upgrading RDS version causes "Provider produced inconsistent final plan" error #31529

Open nikkhn opened 1 year ago

nikkhn commented 1 year ago

Terraform Core Version

1.4.6

AWS Provider Version

4.67.0

Affected Resource(s)

aws_rds_cluster_instance

Expected Behavior

When upgrading engine_version to the next major version, from 14.6 to 15 (with the allow_major_version_upgrade = true flag also set), I expected the Terraform plan to change engine_version to 15.

Actual Behavior

I received the following error:

Error: Provider produced inconsistent final plan

When expanding the plan for module.app.aws_rds_cluster_instance.instance0 to include new values learned so far during apply, provider
"registry.terraform.io/hashicorp/aws" produced an invalid new value for
.engine_version: was cty.StringVal("15"), but now cty.StringVal("14.6"). 
This is a bug in the provider, which should be reported in the provider's
own issue tracker.

Relevant Error/Panic Output Snippet

# module.app.aws_rds_cluster_instance.instance0 will be updated in-place
  ~ resource "aws_rds_cluster_instance" "instance0" {
      ~ engine_version                        = "14" -> "15"
        id                                    = "tf-###########"
        tags                                  = {}
        # (27 unchanged attributes hidden)
    }

Terraform Configuration Files

resource "aws_rds_cluster" "main" {
  cluster_identifier_prefix = "capp-${var.env}"

  engine                 = "aurora-postgresql"
  engine_mode            = "provisioned"
  engine_version         = "15"
  database_name          = "capp"
  master_username        = var.db_username
  master_password        = var.db_password
  vpc_security_group_ids = [data.aws_security_group.db_security_group.id]
  allow_major_version_upgrade = true

  final_snapshot_identifier = "capp-${var.env}-final"

  kms_key_id        = aws_kms_key.main.arn
  storage_encrypted = true

  db_subnet_group_name = data.aws_db_subnet_group.main_subnet_group.name
  serverlessv2_scaling_configuration {
    max_capacity = 2.0
    min_capacity = 0.5
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_rds_cluster_instance" "instance0" {
  cluster_identifier   = aws_rds_cluster.main.id
  instance_class       = "db.serverless"
  engine               = aws_rds_cluster.main.engine
  engine_version       = aws_rds_cluster.main.engine_version
  db_subnet_group_name = data.aws_db_subnet_group.main_subnet_group.name

  lifecycle {
    create_before_destroy = true
  }
}

Steps to Reproduce

  1. terraform init
  2. terraform apply -auto-approve -input=false

Debug Output

....
module.app.aws_cloudwatch_metric_alarm.api_targetgroup_4XX_errors: Refreshing state... [id=capp-dev-api-4XX-errors]
module.app.aws_cloudwatch_metric_alarm.api_targetgroup_500_errors: Refreshing state... [id=capp-dev-api-500-errors]
module.app.aws_cloudwatch_metric_alarm.api_targetgroup_healthy_hosts_count: Refreshing state... [id=capp-dev-api-healthy-hosts]
module.app.aws_cloudwatch_dashboard.main: Refreshing state... [id=capp-api-dev]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the
last "terraform apply" which may have affected this plan:

  # module.app.aws_rds_cluster.main has changed
  ~ resource "aws_rds_cluster" "main" {
      ~ engine_version                      = "15" -> "14.6"
        id                                  = "capp-dev##################"
        tags                                = {}
        # (39 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.

─────────────────────────────────────────────────────────────────────────────

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.app.aws_rds_cluster.main will be updated in-place
  ~ resource "aws_rds_cluster" "main" {
      + allow_major_version_upgrade         = true
      ~ engine_version                      = "14.6" -> "15"
        id                                  = "capp-dev#########"
        tags                                = {}
        # (39 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # module.app.aws_rds_cluster_instance.instance0 will be updated in-place
  ~ resource "aws_rds_cluster_instance" "instance0" {
      ~ engine_version                        = "14" -> "15"
        id                                    = "tf-#########"
        tags                                  = {}
        # (27 unchanged attributes hidden)
    }

  # module.app.module.notify_slack.module.lambda.null_resource.archive[0] must be replaced
-/+ resource "null_resource" "archive" {
      ~ id       = "7251151368694715188" -> (known after apply)
      ~ triggers = { # forces replacement
          ~ "timestamp" = "1684783976555610000" -> "1684784011567908000"
            # (1 unchanged element hidden)
        }
    }

Plan: 1 to add, 2 to change, 1 to destroy.
module.app.module.notify_slack.module.lambda.null_resource.archive[0]: Destroying... [id=#######]
module.app.module.notify_slack.module.lambda.null_resource.archive[0]: Destruction complete after 0s
module.app.module.notify_slack.module.lambda.null_resource.archive[0]: Creating...
module.app.module.notify_slack.module.lambda.null_resource.archive[0]: Provisioning with 'local-exec'...
module.app.module.notify_slack.module.lambda.null_resource.archive[0] (local-exec): Executing: ["python3" ".terraform/modules/app.notify_slack.lambda/package.py" "build" "--timestamp" "1684784011567908000" "builds/########14165699.plan.json"]
module.app.aws_rds_cluster.main: Modifying... [id=capp-#######]
module.app.module.notify_slack.module.lambda.null_resource.archive[0] (local-exec): zip: creating 'builds/57b74ef0d9b50fd61efa5a3f4c9c39d405e73fcee7e33c39b9aed3f914165699.zip' archive
module.app.module.notify_slack.module.lambda.null_resource.archive[0] (local-exec): zip: adding: notify_slack.py
module.app.module.notify_slack.module.lambda.null_resource.archive[0] (local-exec): Created: builds/57b74ef0d9b50fd61efa5a3f4c9c39d405e73fcee7e33c39b9aed3f914165699.zip
module.app.module.notify_slack.module.lambda.null_resource.archive[0]: Creation complete after 0s [id=7405316717692118455]
module.app.aws_rds_cluster.main: Still modifying... [id=capp-dev##########, 10s elapsed]
module.app.aws_rds_cluster.main: Still modifying... [id=capp-dev##########, 20s elapsed]
module.app.aws_rds_cluster.main: Still modifying... [id=capp-dev##########, 30s elapsed]
module.app.aws_rds_cluster.main: Modifications complete after 31s [id=capp-dev##########]
╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for module.app.aws_rds_cluster_instance.instance0
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/aws" produced an invalid new value for
│ .engine_version: was cty.StringVal("15"), but now cty.StringVal("14.6").
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

Panic Output

No response

Important Factoids

No response

References

No response

Would you like to implement a fix?

None


nikkhn commented 1 year ago

Did not mean to close this issue!

gogutza2 commented 1 year ago

+1

ctgdevops commented 10 months ago

+1

adazzi-aurora commented 9 months ago

I just ran into the same issue while upgrading an RDS cluster from Postgres 12 to Postgres 13, and I am stuck.

╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for
│ module.anchore_db.aws_rds_cluster_instance.cluster_instances["1"] to
│ include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/aws" produced an invalid new value for
│ .engine_version: was cty.StringVal("13"), but now cty.StringVal("12.12").
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for
│ module.anchore_db.aws_rds_cluster_instance.cluster_instances["2"] to
│ include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/aws" produced an invalid new value for
│ .engine_version: was cty.StringVal("13"), but now cty.StringVal("12.12").
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵

marksilcox commented 7 months ago

Seeing this same issue with Terraform 1.5.7 and AWS provider 5.0 when attempting to upgrade from 14.7 to 15.

marksilcox commented 7 months ago

Found that the issue for me was a failure in the pg_upgrade that was visible in the RDS logs. So not a provider issue.
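A minimal sketch of pulling those logs with the AWS CLI (the instance and cluster identifiers and the log file name below are placeholders, not values from this issue):

# List the log files RDS exposes for the instance; a failed major version
# upgrade typically leaves its pg_upgrade output in one of these.
aws rds describe-db-log-files --db-instance-identifier my-cluster-instance-1

# Download a log file and search it for the pg_upgrade failure.
aws rds download-db-log-file-portion \
  --db-instance-identifier my-cluster-instance-1 \
  --log-file-name error/postgresql.log.2024-01-01-00 \
  --output text

# Cluster events from the last 48 hours can also show whether the upgrade
# was attempted and then rolled back.
aws rds describe-events \
  --source-identifier my-cluster \
  --source-type db-cluster \
  --duration 2880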

leo-ferlin-sutton commented 5 months ago

@marksilcox Can you give more information? If there's a workaround or solution it would be great to know.

@justinretzolk Do you know if this might get looked at on your side?