mongodb / terraform-provider-mongodbatlas

Terraform MongoDB Atlas Provider: Deploy, update, and manage MongoDB Atlas infrastructure as code through HashiCorp Terraform
https://registry.terraform.io/providers/mongodb/mongodbatlas
Mozilla Public License 2.0

Random change in region_configs order of mongodbatlas_advanced_cluster #1204

Closed: balazs92117 closed this issue 1 year ago

balazs92117 commented 1 year ago

Terraform CLI and Terraform MongoDB Atlas Provider Version

Terraform v1.4.6
on linux_amd64
+ provider registry.terraform.io/hashicorp/azurerm v3.59.0
+ provider registry.terraform.io/hashicorp/random v3.5.1
+ provider registry.terraform.io/mongodb/mongodbatlas v1.6.1

Terraform Configuration File

locals {
  mongodb_regions = {
    germanywestcentral = {
      electable_nodes = 1
      priority        = 6
      atlasregion     = "GERMANY_WEST_CENTRAL"
    },
    westeurope = {
      electable_nodes = 1
      priority        = 7
      atlasregion     = "EUROPE_WEST"
    },
    northeurope = {
      electable_nodes = 1
      priority        = 5
      atlasregion     = "EUROPE_NORTH"
    },
  }
}
resource "mongodbatlas_advanced_cluster" "main" {
  name         = format("name-%s", terraform.workspace)
  project_id   = mongodbatlas_project.main.id
  cluster_type = "REPLICASET"

  replication_specs {
    dynamic "region_configs" {
      for_each = local.mongodb_regions
      content {
        electable_specs {
          instance_size = "M20"
          node_count    = region_configs.value["electable_nodes"]
        }
        provider_name = "AZURE"
        priority      = region_configs.value["priority"]
        region_name   = region_configs.value["atlasregion"]
      }
    }
  }

  labels {
    key   = "environment"
    value = terraform.workspace
  }
}

Steps to Reproduce

If I run terraform plan, it tries to swap the region_configs blocks around:

Terraform will perform the following actions:

  # mongodbatlas_advanced_cluster.main will be updated in-place
  ~ resource "mongodbatlas_advanced_cluster" "main" {
        id                             = "XXXX"
        name                           = "name-dev"
        # (16 unchanged attributes hidden)

      ~ replication_specs {
            id           = "XXX"
            # (3 unchanged attributes hidden)

          ~ region_configs {
              ~ priority      = 7 -> 6
              ~ region_name   = "EUROPE_WEST" -> "GERMANY_WEST_CENTRAL"
                # (1 unchanged attribute hidden)

                # (1 unchanged block hidden)
            }
          ~ region_configs {
              ~ priority      = 6 -> 5
              ~ region_name   = "GERMANY_WEST_CENTRAL" -> "EUROPE_NORTH"
                # (1 unchanged attribute hidden)

                # (1 unchanged block hidden)
            }
          ~ region_configs {
              ~ priority      = 5 -> 7
              ~ region_name   = "EUROPE_NORTH" -> "EUROPE_WEST"
                # (1 unchanged attribute hidden)

                # (1 unchanged block hidden)
            }
        }

        # (4 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Expected Behavior

Terraform shouldn't change anything, since nothing changed in the TF config files.

Actual Behavior

Terraform doesn't recognize the correct region_configs order inside the dynamic block.

Additional Context

It started recently, after I upgraded the provider from 1.6.1 to 1.9.0. I downgraded back to 1.6.1 and it was fine again. After I upgraded to 1.9.0 again, deleted the resource from the state file, and re-imported it, I received the same error, even when I downgraded to 1.6.1 once more (and did a re-import with that version). I managed to test it with an almost identical environment: it works with 1.7.0 but fails with 1.8.0, so something in `1.8.0` did change. I checked https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/website/docs/guides/1.8.0-upgrade-guide.html.markdown but didn't find anything related to this issue.

github-actions[bot] commented 1 year ago

Thanks for opening this issue. The ticket INTMDB-855 was created for internal tracking.

balazs92117 commented 1 year ago

If I re-import with version 1.7.0 and run a plan, it wants to change everything:

Terraform will perform the following actions:

  # mongodbatlas_advanced_cluster.main will be updated in-place
  ~ resource "mongodbatlas_advanced_cluster" "main" {
        id                             = "XXXX"
        name                           = "name-dev"
        # (16 unchanged attributes hidden)

      - replication_specs {
          - container_id = {
              - "AZURE:EUROPE_NORTH"         = "XXX3236"
              - "AZURE:EUROPE_WEST"          = "XXX3238"
              - "AZURE:GERMANY_WEST_CENTRAL" = "XXX3237"
            } -> null
          - id           = "XXX322d" -> null
          - num_shards   = 1 -> null
          - zone_name    = "ZoneName managed by Terraform" -> null

          - region_configs {
              - priority      = 5 -> null
              - provider_name = "AZURE" -> null
              - region_name   = "EUROPE_NORTH" -> null

              - analytics_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M20" -> null
                  - node_count    = 0 -> null
                }

              - auto_scaling {
                  - compute_enabled            = false -> null
                  - compute_scale_down_enabled = false -> null
                  - disk_gb_enabled            = false -> null
                }

              - electable_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M20" -> null
                  - node_count    = 1 -> null
                }

              - read_only_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M20" -> null
                  - node_count    = 0 -> null
                }
            }
          - region_configs {
              - priority      = 6 -> null
              - provider_name = "AZURE" -> null
              - region_name   = "GERMANY_WEST_CENTRAL" -> null

              - analytics_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M20" -> null
                  - node_count    = 0 -> null
                }

              - auto_scaling {
                  - compute_enabled            = false -> null
                  - compute_scale_down_enabled = false -> null
                  - disk_gb_enabled            = false -> null
                }

              - electable_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M20" -> null
                  - node_count    = 1 -> null
                }

              - read_only_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M20" -> null
                  - node_count    = 0 -> null
                }
            }
          - region_configs {
              - priority      = 7 -> null
              - provider_name = "AZURE" -> null
              - region_name   = "EUROPE_WEST" -> null

              - analytics_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M20" -> null
                  - node_count    = 0 -> null
                }

              - auto_scaling {
                  - compute_enabled            = false -> null
                  - compute_scale_down_enabled = false -> null
                  - disk_gb_enabled            = false -> null
                }

              - electable_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M20" -> null
                  - node_count    = 1 -> null
                }

              - read_only_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M20" -> null
                  - node_count    = 0 -> null
                }
            }
        }
      + replication_specs {
          + container_id = (known after apply)
          + id           = (known after apply)
          + num_shards   = 1
          + zone_name    = "ZoneName managed by Terraform"

          + region_configs {
              + priority      = 5
              + provider_name = "AZURE"
              + region_name   = "EUROPE_NORTH"

              + electable_specs {
                  + instance_size = "M20"
                  + node_count    = 1
                }
            }
          + region_configs {
              + priority      = 6
              + provider_name = "AZURE"
              + region_name   = "GERMANY_WEST_CENTRAL"

              + electable_specs {
                  + instance_size = "M20"
                  + node_count    = 1
                }
            }
          + region_configs {
              + priority      = 7
              + provider_name = "AZURE"
              + region_name   = "EUROPE_WEST"

              + electable_specs {
                  + instance_size = "M20"
                  + node_count    = 1
                }
            }
        }

      - timeouts {}

        # (3 unchanged blocks hidden)
    }

maastha commented 1 year ago

Hi @balazs92117

The region_configs attribute in the provider is a TypeList (in v1.9.0), so the order of items is maintained by the plugin. What causes this random ordering is the use of dynamic while iterating over a map, which does not guarantee a stable iteration order.

The random ordering will not happen if you use a static list in your resource instead.

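For illustration, here is a minimal sketch of the reporter's resource rewritten with literal region_configs blocks instead of the dynamic block (based on the original configuration; other attributes such as labels are omitted for brevity):

resource "mongodbatlas_advanced_cluster" "main" {
  name         = format("name-%s", terraform.workspace)
  project_id   = mongodbatlas_project.main.id
  cluster_type = "REPLICASET"

  replication_specs {
    # Static blocks keep an explicit, fixed order (highest priority first here).
    region_configs {
      electable_specs {
        instance_size = "M20"
        node_count    = 1
      }
      provider_name = "AZURE"
      priority      = 7
      region_name   = "EUROPE_WEST"
    }
    region_configs {
      electable_specs {
        instance_size = "M20"
        node_count    = 1
      }
      provider_name = "AZURE"
      priority      = 6
      region_name   = "GERMANY_WEST_CENTRAL"
    }
    region_configs {
      electable_specs {
        instance_size = "M20"
        node_count    = 1
      }
      provider_name = "AZURE"
      priority      = 5
      region_name   = "EUROPE_NORTH"
    }
  }
}
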
To address the change in ordering, if the use of dynamic is essential to your use case, I suggest using a type = list() variable as shown in the example below:

variable "region_configs_list" {
  description = "List of region_configs"
  type = list(object({
    provider_name = string
    priority      = number
    region_name   = string
    electable_specs = list(object({
      instance_size = string
      node_count    = number
    }))
  }))
  default = [{
    provider_name = "AWS",
    priority      = 7,
    region_name   = "US_EAST_1",
    electable_specs = [{
      instance_size = "M20"
      node_count    = 1
    }]
  }]
}
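With a list variable like this, the dynamic block can iterate over the list instead of a map, so the order of the generated blocks is fixed by the list itself. A minimal sketch of how the resource could consume it, assuming the variable name above (the nested dynamic mirrors the list(object) shape of electable_specs):

resource "mongodbatlas_advanced_cluster" "main" {
  name         = format("name-%s", terraform.workspace)
  project_id   = mongodbatlas_project.main.id
  cluster_type = "REPLICASET"

  replication_specs {
    dynamic "region_configs" {
      # Lists preserve element order, unlike iterating over a map.
      for_each = var.region_configs_list
      content {
        provider_name = region_configs.value.provider_name
        priority      = region_configs.value.priority
        region_name   = region_configs.value.region_name

        dynamic "electable_specs" {
          for_each = region_configs.value.electable_specs
          content {
            instance_size = electable_specs.value.instance_size
            node_count    = electable_specs.value.node_count
          }
        }
      }
    }
  }
}
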

balazs92117 commented 1 year ago

Thank you for the investigation. With the suggested modification it works now.