mongodb / terraform-provider-mongodbatlas

Terraform MongoDB Atlas Provider: Deploy, update, and manage MongoDB Atlas infrastructure as code through HashiCorp Terraform
https://registry.terraform.io/providers/mongodb/mongodbatlas
Mozilla Public License 2.0

[Bug]: yearly snapshot with terraform #2326

Closed Kikivsantos closed 2 weeks ago

Kikivsantos commented 3 weeks ago

Is there an existing issue for this?

Provider Version

v1.16.2

Terraform Version

latest

Terraform Edition

Terraform Open Source (OSS)

Current Behavior

I'm changing the snapshot retention values: weekly from 1 week to 4 weeks, and monthly from 60 months to 12 months.

The changes to the weekly and monthly snapshots work fine, but the yearly snapshot is not showing up in the plan as it should (see the sketch after the plan output below).


Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # mongodbatlas_cloud_backup_schedule.prod[0] will be updated in-place
  ~ resource "mongodbatlas_cloud_backup_schedule" "prod" {
        id                                       = "Y2x1c3Rlcl9uYW1l:ZW5nLXByZC1tYXMtZGItbW9uZ28tMjEwLTAx-cHJvamVjdF9pZA==:NWY5YWUwOWY2MjNkMmUyOTQzZWU4ZGQ3"
        # (11 unchanged attributes hidden)

      ~ policy_item_monthly {
            id                 = "63de2d3b4e7dee150a89f154"
          ~ retention_value    = 60 -> 12
            # (3 unchanged attributes hidden)
        }

      ~ policy_item_weekly {
            id                 = "63de2d3b4e7dee150a89f153"
          ~ retention_value    = 1 -> 4
            # (3 unchanged attributes hidden)
        }

        # (2 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
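
For reference, a minimal sketch of the yearly policy item that the configuration below defines (values taken from the policy_item_yearly input in terragrunt.hcl); the expectation was that the plan above would also include a change for this block:

```hcl
policy_item_yearly {
  frequency_interval = 1       # 1st day of the 1st month of the year
  retention_unit     = "years"
  retention_value    = 5
}
```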

Terraform configuration to reproduce the issue

### main.tf:

locals {
  provider_name                                           = var.provider_name == null ? "GCP" : var.provider_name
  provider_region                                         = var.provider_region == null ? "CENTRAL_US" : var.provider_region
  zone_name                                               = var.zone_name == null ? "${local.provider_name}-${local.provider_region}" : var.zone_name
  environment_name                                        = "${terraform.workspace}"
  project_name                                            = "${terraform.workspace}" == "master" ? "production" : "${terraform.workspace}"

  required_tags                                           = {
      env                                                 = local.environment_name
      infrastructure_owner                                = "DBRE"
  }

  tags                                                  = merge(local.required_tags, var.resource_tags)

  env                                                     = {
      master                                              = "prd",
      staging                                             = "stg",
      develop                                             = "dev"
  }

  env_type                                                = {
      master                                              = "prod",
      staging                                             = "nonprod",
      develop                                             = "nonprod"
  }
}

# ------------------------------------------------------------------------------
# MONGODB CLUSTER
# ------------------------------------------------------------------------------
resource "mongodbatlas_advanced_cluster" "default" {
  project_id                                              = data.mongodbatlas_project.default.id
  name                                                    = var.old_cluster_bool ? var.old_cluster_name[local.env[terraform.workspace]].value : "mgo-${local.env[terraform.workspace]}-${var.cluster_name}" 
  cluster_type                                            = var.cluster_type
  backup_enabled                                          = var.backup_enabled
  pit_enabled                                             = "${terraform.workspace}" == "master" ? true : false
  mongo_db_major_version                                  = var.mongo_db_major_version[local.env[terraform.workspace]].value
  disk_size_gb                                            = var.disk_size_gb[local.env[terraform.workspace]].value

  advanced_configuration {
      fail_index_key_too_long                             = var.fail_index_key_too_long
      javascript_enabled                                  = var.javascript_enabled
      minimum_enabled_tls_protocol                        = var.minimum_enabled_tls_protocol
      #no_table_scan                                       = terraform.workspace == "master" ? false : true
      no_table_scan                                       = var.no_table_scan
  }

  replication_specs {
      num_shards                                          = var.cluster_type == "REPLICASET" ? null : var.num_shards[local.env[terraform.workspace]].value

      dynamic "region_configs" {
          for_each                                        = var.regions_config[local.env[terraform.workspace]]
          content{
              electable_specs {
                  instance_size                           = region_configs.value.electable_specs.instance_size
                  node_count                              = region_configs.value.electable_specs.node_count
              }
              auto_scaling {
                  disk_gb_enabled                         = region_configs.value.auto_scaling.disk_gb_enabled
                  compute_enabled                         = region_configs.value.auto_scaling.compute_enabled
                  compute_scale_down_enabled              = region_configs.value.auto_scaling.compute_scale_down_enabled
                  compute_min_instance_size               = region_configs.value.auto_scaling.compute_min_instance_size
                  compute_max_instance_size               = region_configs.value.auto_scaling.compute_max_instance_size
              }
              provider_name                               = region_configs.value.provider_name
              priority                                    = region_configs.value.region_priority
              region_name                                 = region_configs.value.region_name
          }
      }
  }

  dynamic "tags" {
      for_each                                            = local.tags
      content {
          key                                             = tags.key
          value                                           = tags.value
      }
  }

  bi_connector_config {
      enabled                                             = var.bi_connector_enabled
      read_preference                                     = var.bi_connector_read_preference
  }

  lifecycle {
      ignore_changes                                      = [
          replication_specs[0].region_configs[0].electable_specs[0].instance_size,
          paused
      ]
  }
}

# ------------------------------------------------------------------------------
# MONGODB BACKUP SCHEDULE PROD
# ------------------------------------------------------------------------------
resource "mongodbatlas_cloud_backup_schedule" "prod" {
  count                                                   = var.backup_enabled && terraform.workspace == "master" ? 1 : 0
  project_id                                              = data.mongodbatlas_project.default.id
  cluster_name                                            = mongodbatlas_advanced_cluster.default.name 

  reference_hour_of_day                                   = var.reference_hour_of_day
  reference_minute_of_hour                                = var.reference_minute_of_hour
  restore_window_days                                     = var.restore_window_days
  update_snapshots                                        = false

  dynamic "policy_item_hourly" {
      for_each = var.policy_item_hourly[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

      content {
          frequency_interval  = try(var.policy_item_hourly[local.env_type[terraform.workspace]].frequency_interval, null)
          retention_unit      = try(var.policy_item_hourly[local.env_type[terraform.workspace]].retention_unit, null)
          retention_value     = try(var.policy_item_hourly[local.env_type[terraform.workspace]].retention_value, null)
      }

  }

  dynamic "policy_item_daily" {
      for_each = var.policy_item_daily[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

      content {
          frequency_interval  = try(var.policy_item_daily[local.env_type[terraform.workspace]].frequency_interval, null)
          retention_unit      = try(var.policy_item_daily[local.env_type[terraform.workspace]].retention_unit, null)
          retention_value     = try(var.policy_item_daily[local.env_type[terraform.workspace]].retention_value, null)
      }

  }

  dynamic "policy_item_weekly" {
      for_each = var.policy_item_weekly[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

      content {
          frequency_interval  = try(var.policy_item_weekly[local.env_type[terraform.workspace]].frequency_interval, null)
          retention_unit      = try(var.policy_item_weekly[local.env_type[terraform.workspace]].retention_unit, null)
          retention_value     = try(var.policy_item_weekly[local.env_type[terraform.workspace]].retention_value, null)
      }

  }

  dynamic "policy_item_monthly" {
      for_each = var.policy_item_monthly[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

      content {
          frequency_interval  = try(var.policy_item_monthly[local.env_type[terraform.workspace]].frequency_interval, null)
          retention_unit      = try(var.policy_item_monthly[local.env_type[terraform.workspace]].retention_unit, null)
          retention_value     = try(var.policy_item_monthly[local.env_type[terraform.workspace]].retention_value, null)
      }

  }

  dynamic "policy_item_yearly" {
      for_each = var.policy_item_yearly[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

      content {
          frequency_interval  = try(var.policy_item_yearly[local.env_type[terraform.workspace]].frequency_interval, null)
          retention_unit      = try(var.policy_item_yearly[local.env_type[terraform.workspace]].retention_unit, null)
          retention_value     = try(var.policy_item_yearly[local.env_type[terraform.workspace]].retention_value, null)
      }

  }

  depends_on = [mongodbatlas_advanced_cluster.default]
}

# ------------------------------------------------------------------------------
# MONGODB BACKUP SCHEDULE NONPROD
# ------------------------------------------------------------------------------
resource "mongodbatlas_cloud_backup_schedule" "nonprod" {

  count                                                   = var.backup_enabled && terraform.workspace != "master" ? 1 : 0
  project_id                                              = data.mongodbatlas_project.default.id
  cluster_name                                            = mongodbatlas_advanced_cluster.default.name 

  reference_hour_of_day                                   = 21
  reference_minute_of_hour                                = 0
  restore_window_days                                     = 1
  update_snapshots                                        = false

  dynamic "policy_item_daily" {
      for_each = var.policy_item_daily[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

      content {
          frequency_interval  = try(var.policy_item_daily[local.env_type[terraform.workspace]].frequency_interval, null)
          retention_unit      = try(var.policy_item_daily[local.env_type[terraform.workspace]].retention_unit, null)
          retention_value     = try(var.policy_item_daily[local.env_type[terraform.workspace]].retention_value, null)
      }

  }

  depends_on                                              = [mongodbatlas_advanced_cluster.default]
}
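
The schedule resources above generate each policy_item_* block through a conditional dynamic block: the for_each yields one iteration when the matching variable entry defines a frequency_interval, and zero iterations when it is null. A minimal standalone sketch of the same pattern (the variable name and placeholder values are illustrative, not part of the reported config):

```hcl
variable "yearly_policy" {
  type = object({
    frequency_interval = optional(number)
    retention_unit     = optional(string)
    retention_value    = optional(number)
  })
  default = null
}

resource "mongodbatlas_cloud_backup_schedule" "example" {
  project_id   = "000000000000000000000000" # placeholder
  cluster_name = "example-cluster"          # placeholder

  dynamic "policy_item_yearly" {
    # zero blocks when the input is unset, exactly one when it is set
    for_each = var.yearly_policy == null ? [] : [var.yearly_policy]
    content {
      frequency_interval = policy_item_yearly.value.frequency_interval
      retention_unit     = policy_item_yearly.value.retention_unit
      retention_value    = policy_item_yearly.value.retention_value
    }
  }
}
```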

### variable.tf:


variable "provider_name" {
  description = <<HEREDOC
  Optional - Cloud service provider on which the servers are provisioned. The possible 
  values are: AWS for Amazon, GCP for Google Cloud and, AZURE for Microsoft Azure.
  HEREDOC
  default     = null
}

variable "provider_region" {
  description = <<HEREDOC
  Optional - Physical location of your MongoDB cluster. The region you choose can affect
  network latency for clients accessing your databases. Requires the Atlas region name, 
  see the reference list for AWS, GCP, Azure. Do not specify this field when creating a 
  multi-region cluster using the replicationSpec document or a Global Cluster with the 
  replicationSpecs array.
  HEREDOC
  default     = null
}

variable "cluster_name" {
  description = <<HEREDOC
  Optional - Name of the cluster as it appears in Atlas. Once the cluster is created, its
  name cannot be changed.
  HEREDOC
  type        = string
}

variable "old_cluster_name" {
  description = <<HEREDOC
  Optional - Name of the cluster as it appears in Atlas. Once the cluster is created, its
  name cannot be changed.
  HEREDOC
  type        = map(object({
      value = string
  }))
  default = {
      prd = {
          value = "PROD"
      }
      stg = {
          value = "STG"
      }
      dev = {
          value = "DEV"
      }
  }
}

variable "old_cluster_bool" {
  description = <<HEREDOC
  Optional - Specifies if the cluster is an old cluster.
  HEREDOC
  type        = bool
  default     = true
}

variable "cluster_type" {
  description = <<HEREDOC
  Optional - Specifies the type of the cluster that you want to modify. You cannot convert
  a sharded cluster deployment to a replica set deployment. Accepted values include:
  REPLICASET for Replica set, SHARDED for Sharded cluster, and GEOSHARDED for Global Cluster
  HEREDOC
  default     = "REPLICASET"
}

variable "mongo_db_major_version" {
  description = <<HEREDOC
  Optional - Capacity, in gigabytes, of the host’s root volume. Increase this
  number to add capacity, up to a maximum possible value of 4096 (i.e., 4 TB). This value must
  be a positive integer. If you specify diskSizeGB with a lower disk size, Atlas defaults to
  the minimum disk size value. Note: The maximum value for disk storage cannot exceed 50 times
  the maximum RAM for the selected cluster. If you require additional storage space beyond this
  limitation, consider upgrading your cluster to a higher tier.
  HEREDOC
      type        = map(object({
      value = string
  }))
  default = {
      prd = {
          value = "7.0"
      }
      stg = {
          value = "7.0"
      }
      dev = {
          value = "7.0"
      }
  }
}

variable "version_release_system" {
  description = <<HEREDOC
  Optional - Release cadence that Atlas uses for this cluster. This parameter defaults to LTS. 
  If you set this field to CONTINUOUS, you must omit the mongo_db_major_version field. Atlas accepts:
  CONTINUOUS - Atlas deploys the latest version of MongoDB available for the cluster tier.
  LTS - Atlas deploys the latest Long Term Support (LTS) version of MongoDB available for the cluster tier.
  HEREDOC
  default     = "LTS"
}

variable "num_shards" {
  description = <<HEREDOC
  Optional - Number of shards, minimum 1.
  The default is 1.    
  HEREDOC
  type        = map(object({
      value = number
  }))
  default = {
      prd = {
          value = 1
      }
      stg = {
          value = 1
      }
      dev = {
          value = 1
      }
  }
}

variable "instance_size" {
  description = <<HEREDOC
  Optional - Atlas provides different instance sizes, each with a default storage capacity and RAM size.
  The instance size you select is used for all the data-bearing servers in your cluster.
  HEREDOC
  default = "M30"
}

variable "compute_enabled" {
  description = <<HEREDOC
  Optional - Specifies whether cluster tier auto-scaling is enabled. The default is true.
  IMPORTANT: If compute_enabled is true, then Atlas will 
  automatically scale up to the maximum provided and down to the minimum, if provided.
  HEREDOC
  type        = bool
  default     = true
}

variable "compute_scale_down_enabled" {
  description = <<HEREDOC
  Optional - Set to true to enable the cluster tier to scale down. This option is only available
  if compute_enabled is true. The default is true.
  HEREDOC
  type        = bool
  default     = true
}

variable "compute_min_instance_size" {
  description = <<HEREDOC
  Optional - Minimum instance size to which your cluster can automatically scale (e.g., M10).
  The default is "M30".
  HEREDOC
  default     = "M30"
}

variable "compute_max_instance_size" {
  description = <<HEREDOC
  Optional - Maximum instance size to which your cluster can automatically scale (e.g., M40).
  The default is "M80".
  HEREDOC
  default     = "M80"
}

variable "disk_size_gb" {
  description = <<HEREDOC
  Optional - Capacity, in gigabytes, of the host’s root volume. Increase this
  number to add capacity, up to a maximum possible value of 4096 (i.e., 4 TB). This value must
  be a positive integer. If you specify diskSizeGB with a lower disk size, Atlas defaults to
  the minimum disk size value. Note: The maximum value for disk storage cannot exceed 50 times
  the maximum RAM for the selected cluster. If you require additional storage space beyond this
  limitation, consider upgrading your cluster to a higher tier.
  HEREDOC
  type        = map(object({
      value = number
  }))
  default = {
      prd = {
          value = 10
      }
      stg = {
          value = 10
      }
      dev = {
          value = 10
      }
  }
}

variable "backup_enabled" {
  description = <<HEREDOC
  Optional - Flag indicating if the cluster uses Cloud Backup for backups. If true, the cluster
  uses Cloud Backup for backups. The default is true.
  HEREDOC
  type        = bool
  default     = true
}

variable "pit_enabled" {
  description = <<HEREDOC
  Optional - Flag that indicates if the cluster uses Continuous Cloud Backup. If set to true,
  backup_enabled must also be set to true. The default is true.
  HEREDOC
  type        = bool
  default     = true
}

variable "disk_gb_enabled" {
  description = <<HEREDOC
  Optional - Specifies whether disk auto-scaling is enabled. The default is true.
  HEREDOC
  type        = bool
  default     = true
}

variable "fail_index_key_too_long" {
  description = <<HEREDOC
  Optional - When true, documents can only be updated or inserted if, for all indexed fields
  on the target collection, the corresponding index entries do not exceed 1024 bytes.
  When false, mongod writes documents that exceed the limit but does not index them.
  HEREDOC
  type        = bool
  default     = false
}

variable "javascript_enabled" {
  description = <<HEREDOC
  Optional - When true, the cluster allows execution of operations that perform server-side
  executions of JavaScript. When false, the cluster disables execution of those operations.
  HEREDOC
  type        = bool
  default     = true
}

variable "minimum_enabled_tls_protocol" {
  description = <<HEREDOC
  Optional - Sets the minimum Transport Layer Security (TLS) version the cluster accepts for
  incoming connections. Valid values are: TLS1_0, TLS1_1, TLS1_2. The default is "TLS1_2".
  HEREDOC
  default     = "TLS1_2"
}

variable "no_table_scan" {
  description = <<HEREDOC
  Optional - When true, the cluster disables the execution of any query that requires a collection
  scan to return results. When false, the cluster allows the execution of those operations.
  HEREDOC
  type        = bool
  default     = false
}

variable "zone_name" {
  description = <<HEREDOC
  Optional - Name for the zone in a Global Cluster.
  HEREDOC
  default     = null
}

variable "regions_config" {
  description = <<HEREDOC
  Required - Physical location of the region. Each regionsConfig document describes
  the region’s priority in elections and the number and type of MongoDB nodes Atlas
  deploys to the region. You can set the following parameters:

  - region_name - Optional - Physical location of your MongoDB cluster. The region
  you choose can affect network latency for clients accessing your databases.

  - electable_nodes - Optional - Number of electable nodes for Atlas to deploy to the
  region. Electable nodes can become the primary and can facilitate local reads.
  The total number of electableNodes across all replication spec regions must total 3,
  5, or 7. Specify 0 if you do not want any electable nodes in the region. You cannot
  create electable nodes in a region if priority is 0.

  - priority - Optional - Election priority of the region. For regions with only read-only
  nodes, set this value to 0. For regions where electable_nodes is at least 1, each region
  must have a priority of exactly one (1) less than the previous region. The first region 
  must have a priority of 7. The lowest possible priority is 1. The priority 7 region 
  identifies the Preferred Region of the cluster. Atlas places the primary node in the 
  Preferred Region. Priorities 1 through 7 are exclusive - no more than one region per 
  cluster can be assigned a given priority. Example: If you have three regions, their 
  priorities would be 7, 6, and 5 respectively. If you added two more regions for supporting
  electable nodes, the priorities of those regions would be 4 and 3 respectively.

  - read_only_nodes - Optional - Number of read-only nodes for Atlas to deploy to the region.
  Read-only nodes can never become the primary, but can facilitate local-reads. Specify 0 if
  you do not want any read-only nodes in the region.

  - analytics_nodes - Optional - The number of analytics nodes for Atlas to deploy to the region.
  Analytics nodes are useful for handling analytic data such as reporting queries from BI 
  Connector for Atlas. Analytics nodes are read-only, and can never become the primary. If you do
  not specify this option, no analytics nodes are deployed to the region.
  HEREDOC
  type        = map(list(any))
}

variable "resource_tags" {
  description = <<HEREDOC
  Optional - Key-value pairs that tag and categorize the cluster. Each key and value has a
  maximum length of 255 characters. You cannot set the key Infrastructure Tool, it is used
  for internal purposes to track aggregate usage.
  HEREDOC
  type        = map(string)
  default = {}
}

variable "bi_connector_enabled" {
  description = <<HEREDOC
  Optional - Specifies whether or not BI Connector for Atlas is enabled on the cluster.
  Set to true to enable BI Connector for Atlas. Set to false to disable BI Connector for Atlas.
  HEREDOC
  type        = bool
  default     = false
}
variable "bi_connector_read_preference" {
  description = <<HEREDOC
  Optional - Specifies the read preference to be used by BI Connector for Atlas on the cluster.
  Each BI Connector for Atlas read preference contains a distinct combination of readPreference
  and readPreferenceTags options. For details on BI Connector for Atlas read preferences, refer
  to the BI Connector Read Preferences Table.
  Set to "primary" to have BI Connector for Atlas read from the primary. Set to "secondary" to 
  have BI Connector for Atlas read from a secondary member. Default if there are no analytics 
  nodes in the cluster. Set to "analytics" to have BI Connector for Atlas read from an analytics
  node. Default if the cluster contains analytics nodes.
  HEREDOC
  type        = string
  default     = "secondary"
}

# ------------------------------------------------------------------------------
# MONGODB BACKUP SCHEDULE
# ------------------------------------------------------------------------------
variable "policy_item_hourly" {
  description = <<HEREDOC
  Optional - Specifies the backup policy for the cluster. Each backup policy contains a
  distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
  For details on backup policies, refer to the Backup Policies Table.
  HEREDOC
  type        = map(object({
      frequency_interval = optional(number)
      retention_unit     = optional(string)
      retention_value    = optional(number)
  }))
  default = {
      prod = {
          frequency_interval          = 1
          retention_unit              = "days"
          retention_value             = 1
      }
      nonprod = {
          frequency_interval          = 1
          retention_unit              = "days"
          retention_value             = 1
      }
  }
}

variable "policy_item_daily" {
  description = <<HEREDOC
  Optional - Specifies the backup policy for the cluster. Each backup policy contains a
  distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
  For details on backup policies, refer to the Backup Policies Table.
  HEREDOC
  type        = map(object({
      frequency_interval = optional(number)
      retention_unit     = optional(string)
      retention_value    = optional(number)
  }))
  default = {
      prod = {
          frequency_interval          = 1
          retention_unit              = "days"
          retention_value             = 7
      }
      nonprod = {
          frequency_interval          = 1
          retention_unit              = "days"
          retention_value             = 5
      }
  }
}

variable "policy_item_weekly" {
  description = <<HEREDOC
  Optional - Specifies the backup policy for the cluster. Each backup policy contains a
  distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
  For details on backup policies, refer to the Backup Policies Table.
  HEREDOC
  type        = map(object({
      frequency_interval = optional(number)
      retention_unit     = optional(string)
      retention_value    = optional(number)
  }))
  default = {
      prod = {
          frequency_interval          = 6
          retention_unit              = "weeks"
          retention_value             = 4
      }
      nonprod = {
          frequency_interval          = 1
          retention_unit              = "weeks"
          retention_value             = 1
      }
  }
}

variable "policy_item_monthly" {
  description = <<HEREDOC
  Optional - Specifies the backup policy for the cluster. Each backup policy contains a
  distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
  For details on backup policies, refer to the Backup Policies Table.
  HEREDOC
  type        = map(object({
      frequency_interval = optional(number)
      retention_unit     = optional(string)
      retention_value    = optional(number)
  }))
  default = {
      prod = {
          frequency_interval          = 40
          retention_unit              = "months"
          retention_value             = 3
      }
      nonprod = {
          frequency_interval          = 1
          retention_unit              = "months"
          retention_value             = 1
      }
  }
}

variable "policy_item_yearly" {
  description = <<HEREDOC
  Optional - Specifies the backup policy for the cluster. Each backup policy contains a
  distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
  For details on backup policies, refer to the Backup Policies Table.
  HEREDOC
  type        = map(object({
      frequency_interval = optional(number)
      retention_unit     = optional(string)
      retention_value    = optional(number)
  }))
  default = {
      prod = {
          frequency_interval          = 1
          retention_unit              = "years"
          retention_value             = 5
      }
      nonprod = {
          frequency_interval          = 1
          retention_unit              = "years"
          retention_value             = 1
      }
  }
}
variable "reference_hour_of_day" {
  description = <<HEREDOC
  UTC Hour of day between 0 and 23, inclusive, 
  representing which hour of the day that Atlas takes snapshots for backup policy items.
  HEREDOC
  type        = number
  default     = 1
}

variable "reference_minute_of_hour" {
  description = <<HEREDOC
  UTC Minutes after reference_hour_of_day that Atlas takes snapshots for backup policy items. 
  Must be between 0 and 59, inclusive.
  HEREDOC
  type        = number
  default     = 0
}

variable "restore_window_days" {
  description = <<HEREDOC
  Number of days back in time you can restore to with point-in-time accuracy. 
  Must be a positive, non-zero integer
  HEREDOC
  type        = number
  default     = 1
}
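
A note on the guard driving those dynamic blocks: object attributes declared with optional() and no explicit default evaluate to null when the caller omits them, which is exactly what the frequency_interval == null checks in main.tf rely on. A small self-contained illustration (hypothetical names, Terraform >= 1.3):

```hcl
variable "policy" {
  type = object({
    frequency_interval = optional(number) # null when omitted by the caller
  })
  default = {}
}

output "policy_item_enabled" {
  # false when frequency_interval was omitted, true when it was provided
  value = var.policy.frequency_interval != null
}
```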

### terragrunt.hcl (calls the Terraform module):


include {
  path                                = find_in_parent_folders()
}

locals {
  component_name = "modules/cluster"
  component_version = "v1.5.0"
  cluster_vars                        = read_terragrunt_config(find_in_parent_folders("cluster.hcl")).locals
}

inputs                                = {
  no_table_scan                       = false
  cluster_name                        = local.cluster_vars.old_cluster
  old_cluster_name                    = local.cluster_vars.old_cluster
  cluster_type                        = "SHARDED"
  num_shards                          = { prd = { value = 1 }, stg = { value = 1 }, dev = { value = 1 } }
  disk_size_gb                        = { prd = { value = 2800 }, stg = { value = 310 }, dev = { value = 40 } }
  mongo_db_major_version              = { prd = { value = "7.0" }, stg = { value = "7.0" }, dev = { value = "7.0" } }

  regions_config                      = {
    prd                               = [
      {
        region_name                   = "SOUTH_AMERICA_EAST_1"
        provider_name                 = "GCP"
        region_priority               = 7

        electable_specs               = {
          instance_size               = "M40"
          node_count                  = 3
        }

        auto_scaling                  = {
          compute_min_instance_size   = "M40"
          compute_max_instance_size   = "M60"
          disk_gb_enabled             = true
          compute_enabled             = true
          compute_scale_down_enabled  = true
        }
      }
    ]
    stg                               = [
      {
        region_name                   = "CENTRAL_US"
        provider_name                 = "GCP"
        region_priority               = 7

        electable_specs               = {
          instance_size               = "M30"
          node_count                  = 3
        }

        auto_scaling                  = {
          compute_min_instance_size   = "M30"
          compute_max_instance_size   = "M40"
          disk_gb_enabled             = true
          compute_enabled             = true
          compute_scale_down_enabled  = true
        }
      }
    ]
    dev                               = [
      {
        region_name                   = "CENTRAL_US"
        provider_name                 = "GCP"
        region_priority               = 7

        electable_specs               = {
          instance_size               = "M30"
          node_count                  = 3
        }

        auto_scaling                  = {
          compute_min_instance_size   = null
          compute_max_instance_size   = null
          disk_gb_enabled             = true
          compute_enabled             = false
          compute_scale_down_enabled  = false
        }
      }
    ]
  }

  reference_hour_of_day               = 1
  reference_minute_of_hour            = 20
  restore_window_days                 = 1

  policy_item_hourly                  = {
    prod                              = {
        frequency_interval            = 1
        retention_unit                = "days"
        retention_value               = 1
    }
  }

  policy_item_daily                   = {
    prod                              = {
        frequency_interval            = 1
        retention_unit                = "days"
        retention_value               = 7
    }
    nonprod                           = {
        frequency_interval            = 1
        retention_unit                = "days"
        retention_value               = 5
    }
  }

  policy_item_weekly                  = {
    prod                              = {
        frequency_interval            = 1
        retention_unit                = "weeks"
        retention_value               = 4
    }
  }

  policy_item_monthly                 = {
    prod                              = {
        frequency_interval            = 40
        retention_unit                = "months"
        retention_value               = 12
    }
  }

  policy_item_yearly                  = {
    prod                              = {
        frequency_interval            = 1
        retention_unit                = "years"
        retention_value               = 5
    }
  }

  resource_tags                       = {
    service-owner                     = "Boundary-Apps"
    service-name                      = "receivables-210"
  }
} 

### Steps To Reproduce

terragrunt init

terragrunt plan

### Logs

```txt
## terragrunt init:

Initializing the backend...

Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding latest version of mongodb/mongodbatlas...
- Finding latest version of hashicorp/google...
- Installing mongodb/mongodbatlas v1.16.2...
- Installed mongodb/mongodbatlas v1.16.2 (signed by a HashiCorp partner, key ID 2A32ED1F3AD25ABF)
- Installing hashicorp/google v5.32.0...
- Installed hashicorp/google v5.32.0 (signed by HashiCorp)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

## terragrunt plan:

data.mongodbatlas_project.default: Reading...
data.mongodbatlas_project.default: Read complete after 2s [id=5f9ae09f623d2e2943ee8dd7]
mongodbatlas_advanced_cluster.default: Refreshing state... [id=Y2x1c3Rlcl9pZA==:NWZkY2RkNjY2MWI0MGMyOTgwZDM0ZjI5-Y2x1c3Rlcl9uYW1l:ZW5nLXByZC1tYXMtZGItbW9uZ28tMjEwLTAx-cHJvamVjdF9pZA==:NWY5YWUwOWY2MjNkMmUyOTQzZWU4ZGQ3]
mongodbatlas_cloud_backup_schedule.prod[0]: Refreshing state... [id=Y2x1c3Rlcl9uYW1l:ZW5nLXByZC1tYXMtZGItbW9uZ28tMjEwLTAx-cHJvamVjdF9pZA==:NWY5YWUwOWY2MjNkMmUyOTQzZWU4ZGQ3]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # mongodbatlas_cloud_backup_schedule.prod[0] will be updated in-place
  ~ resource "mongodbatlas_cloud_backup_schedule" "prod" ***
        id                                       = "Y2x1c3Rlcl9uYW1l:ZW5nLXByZC1tYXMtZGItbW9uZ28tMjEwLTAx-cHJvamVjdF9pZA==:NWY5YWUwOWY2MjNkMmUyOTQzZWU4ZGQ3"
        # (11 unchanged attributes hidden)

      ~ policy_item_monthly ***
            id                 = "63de2d3b4e7dee150a89f154"
          ~ retention_value    = 60 -> 12
            # (3 unchanged attributes hidden)
        ***

      ~ policy_item_weekly ***
            id                 = "63de2d3b4e7dee150a89f153"
          ~ retention_value    = 1 -> 4
            # (3 unchanged attributes hidden)
        ***

        # (2 unchanged blocks hidden)
    ***

Plan: 0 to add, 1 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: plan.tmp

To perform exactly these actions, run the following command to apply:
    terraform apply "plan.tmp"


### Code of Conduct

- [X] I agree to follow this project's Code of Conduct
github-actions[bot] commented 3 weeks ago

Thanks for opening this issue! Please make sure you've followed our guidelines when opening the issue. In short, to help us reproduce the issue we need:

The ticket CLOUDP-252499 was created for internal tracking.

github-actions[bot] commented 3 weeks ago

This issue has gone 7 days without any activity and meets the project’s definition of "stale". This will be auto-closed if there is no new activity over the next 7 days. If the issue is still relevant and active, you can simply comment with a "bump" to keep it open, or add the label "not_stale". Thanks for keeping our repository healthy!

maastha commented 2 weeks ago

Hi @Kikivsantos, thank you for creating this issue :)

I was not able to reproduce this. I tried with the config below:


resource "mongodbatlas_cloud_backup_schedule" "test" {
  for_each     = local.atlas_clusters
  project_id   = mongodbatlas_project.atlas-project.id
  cluster_name = mongodbatlas_advanced_cluster.automated_backup_test_cluster[each.key].name

  reference_hour_of_day    = 3
  reference_minute_of_hour = 45
  restore_window_days      = 4

  policy_item_hourly {
    frequency_interval = 1 #accepted values = 1, 2, 4, 6, 8, 12 -> every n hours
    retention_unit     = "days"
    retention_value    = 1
  }
  policy_item_daily {
    frequency_interval = 1 #accepted values = 1 -> every 1 day
    retention_unit     = "days"
    retention_value    = 2
  }
  policy_item_weekly {
    frequency_interval = 4 # accepted values = 1 to 7 -> every 1=Monday,2=Tuesday,3=Wednesday,4=Thursday,5=Friday,6=Saturday,7=Sunday day of the week
    retention_unit     = "weeks"
    retention_value    = 4
  }
  policy_item_monthly {
    frequency_interval = 5 # accepted values = 1 to 28 -> 1 to 28 every nth day of the month  
    # accepted values = 40 -> every last day of the month
    retention_unit  = "months"
    retention_value = 12
  }
  policy_item_yearly {
    frequency_interval = 1 # accepted values = 1 to 12 -> 1st day of nth month  
    retention_unit     = "years"
    retention_value    = 5
  }

  depends_on = [
    mongodbatlas_advanced_cluster.automated_backup_test_cluster
  ]
}

and I do see the yearly schedule showing up in the update plan:


Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # mongodbatlas_cloud_backup_schedule.test["cluster_1"] will be updated in-place
  ~ resource "mongodbatlas_cloud_backup_schedule" "test" {
        id                                       = "Y2x1c3Rlcl9uYW1l:bTEwLWF3cy0xZQ==-cHJvamVjdF9pZA==:NjY3MDE0ZjA5MzU3M2YwOGI0NmM5Y2Nm"
        # (10 unchanged attributes hidden)

      ~ policy_item_monthly {
            id                 = "6670175293573f08b46ca183"
          ~ retention_value    = 60 -> 12
            # (3 unchanged attributes hidden)
        }

      ~ policy_item_weekly {
            id                 = "6670175293573f08b46ca182"
          ~ retention_value    = 1 -> 4
            # (3 unchanged attributes hidden)
        }

      + policy_item_yearly {
          + frequency_interval = 1
          + retention_unit     = "years"
          + retention_value    = 5
        }

        # (2 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

If you are still running into this issue, please share your configuration and log files in line with our one-click reproducible issues principle.

Thank you!