mongodb / terraform-provider-mongodbatlas

Terraform MongoDB Atlas Provider: Deploy, update, and manage MongoDB Atlas infrastructure as code through HashiCorp Terraform
https://registry.terraform.io/providers/mongodb/mongodbatlas
Mozilla Public License 2.0

[Bug]: mongodbatlas_cloud_backup_schedule - yearly snapshot not working #2359

Closed · Kikivsantos closed this issue 6 days ago

Kikivsantos commented 1 week ago

Is there an existing issue for this?

Provider Version

latest - v1.17.1

Terraform Version

latest

Terraform Edition

Terraform Open Source (OSS)

Current Behavior

The yearly snapshot policy is not being created by Terraform (I have to create it manually in Atlas).

After creating it manually, running any change against my cluster resource produces a plan that removes the yearly snapshot policy, even though my configuration still contains it, as shown in the plan output under Logs below.

Terraform configuration to reproduce the issue

## main.tf:

locals {
    provider_name                                           = var.provider_name == null ? "GCP" : var.provider_name
    provider_region                                         = var.provider_region == null ? "CENTRAL_US" : var.provider_region
    zone_name                                               = var.zone_name == null ? "${local.provider_name}-${local.provider_region}" : var.zone_name
    environment_name                                        = terraform.workspace
    project_name                                            = terraform.workspace == "master" ? "production" : terraform.workspace

    required_tags                                           = {
        env                                                 = local.environment_name
        infrastructure_owner                                = "DBRE"
    }

    tags                                                  = merge(local.required_tags, var.resource_tags)

    env                                                     = {
        master                                              = "prd",
        staging                                             = "stg",
        develop                                             = "dev"
    }

    env_type                                                = {
        master                                              = "prod",
        staging                                             = "nonprod",
        develop                                             = "nonprod"
    }
}
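The two lookup maps above route the Terraform workspace name to a short environment code (`local.env`) and an environment type (`local.env_type`). A minimal Python sketch of that double lookup, for illustration only (not part of the configuration):

```python
# Mirror of the local.env / local.env_type lookup maps in the locals block.
env = {"master": "prd", "staging": "stg", "develop": "dev"}
env_type = {"master": "prod", "staging": "nonprod", "develop": "nonprod"}

def resolve(workspace):
    """Return (short env code, env type) for a Terraform workspace name."""
    return env[workspace], env_type[workspace]

print(resolve("master"))   # ('prd', 'prod')
print(resolve("staging"))  # ('stg', 'nonprod')
```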

# ------------------------------------------------------------------------------
# MONGODB CLUSTER
# ------------------------------------------------------------------------------
resource "mongodbatlas_advanced_cluster" "default" {
    project_id                                              = data.mongodbatlas_project.default.id
    name                                                    = var.old_cluster_bool ? var.old_cluster_name[local.env[terraform.workspace]].value : "mgo-${local.env[terraform.workspace]}-${var.cluster_name}" 
    cluster_type                                            = var.cluster_type
    backup_enabled                                          = var.backup_enabled
    pit_enabled                                             = terraform.workspace == "master"
    mongo_db_major_version                                  = var.mongo_db_major_version[local.env[terraform.workspace]].value
    disk_size_gb                                            = var.disk_size_gb[local.env[terraform.workspace]].value

    advanced_configuration {
        fail_index_key_too_long                             = var.fail_index_key_too_long
        javascript_enabled                                  = var.javascript_enabled
        minimum_enabled_tls_protocol                        = var.minimum_enabled_tls_protocol
        #no_table_scan                                       = terraform.workspace == "master" ? false : true
        no_table_scan                                       = var.no_table_scan
    }

    replication_specs {
        num_shards                                          = var.cluster_type == "REPLICASET" ? null : var.num_shards[local.env[terraform.workspace]].value

        dynamic "region_configs" {
            for_each                                        = var.regions_config[local.env[terraform.workspace]]
            content{
                electable_specs {
                    instance_size                           = region_configs.value.electable_specs.instance_size
                    node_count                              = region_configs.value.electable_specs.node_count
                }
                auto_scaling {
                    disk_gb_enabled                         = region_configs.value.auto_scaling.disk_gb_enabled
                    compute_enabled                         = region_configs.value.auto_scaling.compute_enabled
                    compute_scale_down_enabled              = region_configs.value.auto_scaling.compute_scale_down_enabled
                    compute_min_instance_size               = region_configs.value.auto_scaling.compute_min_instance_size
                    compute_max_instance_size               = region_configs.value.auto_scaling.compute_max_instance_size
                }
                provider_name                               = region_configs.value.provider_name
                priority                                    = region_configs.value.region_priority
                region_name                                 = region_configs.value.region_name
            }
        }
    }

    dynamic "tags" {
        for_each                                            = local.tags
        content {
            key                                             = tags.key
            value                                           = tags.value
        }
    }

    bi_connector_config {
        enabled                                             = var.bi_connector_enabled
        read_preference                                     = var.bi_connector_read_preference
    }

    lifecycle {
        ignore_changes                                      = [
            replication_specs[0].region_configs[0].electable_specs[0].instance_size,
            paused
        ]
    }
}

# ------------------------------------------------------------------------------
# MONGODB BACKUP SCHEDULE PROD
# ------------------------------------------------------------------------------
resource "mongodbatlas_cloud_backup_schedule" "prod" {
    count                                                   = var.backup_enabled && terraform.workspace == "master" ? 1 : 0
    project_id                                              = data.mongodbatlas_project.default.id
    cluster_name                                            = mongodbatlas_advanced_cluster.default.name 

    reference_hour_of_day                                   = var.reference_hour_of_day
    reference_minute_of_hour                                = var.reference_minute_of_hour
    restore_window_days                                     = var.restore_window_days
    update_snapshots                                        = false

    dynamic "policy_item_hourly" {
        for_each = var.policy_item_hourly[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

        content {
            frequency_interval  = try(var.policy_item_hourly[local.env_type[terraform.workspace]].frequency_interval, null)
            retention_unit      = try(var.policy_item_hourly[local.env_type[terraform.workspace]].retention_unit, null)
            retention_value     = try(var.policy_item_hourly[local.env_type[terraform.workspace]].retention_value, null)
        }

    }

    dynamic "policy_item_daily" {
        for_each = var.policy_item_daily[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

        content {
            frequency_interval  = try(var.policy_item_daily[local.env_type[terraform.workspace]].frequency_interval, null)
            retention_unit      = try(var.policy_item_daily[local.env_type[terraform.workspace]].retention_unit, null)
            retention_value     = try(var.policy_item_daily[local.env_type[terraform.workspace]].retention_value, null)
        }

    }

    dynamic "policy_item_weekly" {
        for_each = var.policy_item_weekly[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

        content {
            frequency_interval  = try(var.policy_item_weekly[local.env_type[terraform.workspace]].frequency_interval, null)
            retention_unit      = try(var.policy_item_weekly[local.env_type[terraform.workspace]].retention_unit, null)
            retention_value     = try(var.policy_item_weekly[local.env_type[terraform.workspace]].retention_value, null)
        }

    }

    dynamic "policy_item_monthly" {
        for_each = var.policy_item_monthly[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

        content {
            frequency_interval  = try(var.policy_item_monthly[local.env_type[terraform.workspace]].frequency_interval, null)
            retention_unit      = try(var.policy_item_monthly[local.env_type[terraform.workspace]].retention_unit, null)
            retention_value     = try(var.policy_item_monthly[local.env_type[terraform.workspace]].retention_value, null)
        }

    }

    dynamic "policy_item_yearly" {
        for_each = var.policy_item_yearly[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

        content {
            frequency_interval  = try(var.policy_item_yearly[local.env_type[terraform.workspace]].frequency_interval, null)
            retention_unit      = try(var.policy_item_yearly[local.env_type[terraform.workspace]].retention_unit, null)
            retention_value     = try(var.policy_item_yearly[local.env_type[terraform.workspace]].retention_value, null)
        }

    }

    depends_on = [mongodbatlas_advanced_cluster.default]
}
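For reference, with the prod defaults above (`frequency_interval = 1`, `retention_unit = "years"`, `retention_value = 5`) the yearly dynamic block is expected to render as the following static block on the master workspace. This is a sketch of the intended rendering, not provider output; for a yearly policy, Atlas interprets `frequency_interval` as the month of the year:

```hcl
policy_item_yearly {
  frequency_interval = 1        # month of year (1 = January)
  retention_unit     = "years"
  retention_value    = 5
}
```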

# ------------------------------------------------------------------------------
# MONGODB BACKUP SCHEDULE NONPROD
# ------------------------------------------------------------------------------
resource "mongodbatlas_cloud_backup_schedule" "nonprod" {

    count                                                   = var.backup_enabled && terraform.workspace != "master" ? 1 : 0
    project_id                                              = data.mongodbatlas_project.default.id
    cluster_name                                            = mongodbatlas_advanced_cluster.default.name 

    reference_hour_of_day                                   = 21
    reference_minute_of_hour                                = 0
    restore_window_days                                     = 1
    update_snapshots                                        = false

    dynamic "policy_item_daily" {
        for_each = var.policy_item_daily[local.env_type[terraform.workspace]].frequency_interval == null ? toset([]) : toset([1])

        content {
            frequency_interval  = try(var.policy_item_daily[local.env_type[terraform.workspace]].frequency_interval, null)
            retention_unit      = try(var.policy_item_daily[local.env_type[terraform.workspace]].retention_unit, null)
            retention_value     = try(var.policy_item_daily[local.env_type[terraform.workspace]].retention_value, null)
        }

    }

    depends_on                                              = [mongodbatlas_advanced_cluster.default]
}
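Every dynamic block in both schedule resources is gated on the same pattern: `for_each` yields an empty set when `frequency_interval` is null and a one-element set otherwise, so the block is rendered at most once. A minimal Python sketch of that gate, for illustration only:

```python
def gated_blocks(policy):
    """Return the blocks the dynamic block would render: none when
    frequency_interval is unset, exactly one block otherwise."""
    if policy.get("frequency_interval") is None:
        return []
    return [{
        "frequency_interval": policy["frequency_interval"],
        "retention_unit": policy.get("retention_unit"),
        "retention_value": policy.get("retention_value"),
    }]

print(len(gated_blocks({"frequency_interval": None})))  # 0
print(len(gated_blocks({"frequency_interval": 1,
                        "retention_unit": "years",
                        "retention_value": 5})))        # 1
```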

## var.tf:

variable "provider_name" {
    description = <<HEREDOC
    Optional - Cloud service provider on which the servers are provisioned. The possible 
    values are: AWS for Amazon, GCP for Google Cloud and, AZURE for Microsoft Azure.
    HEREDOC
    default     = null
}

variable "provider_region" {
    description = <<HEREDOC
    Optional - Physical location of your MongoDB cluster. The region you choose can affect
    network latency for clients accessing your databases. Requires the Atlas region name, 
    see the reference list for AWS, GCP, Azure. Do not specify this field when creating a 
    multi-region cluster using the replicationSpec document or a Global Cluster with the 
    replicationSpecs array.
    HEREDOC
    default     = null
}

variable "cluster_name" {
    description = <<HEREDOC
    Optional - Name of the cluster as it appears in Atlas. Once the cluster is created, its
    name cannot be changed.
    HEREDOC
    type        = string
}

variable "old_cluster_name" {
    description = <<HEREDOC
    Optional - Name of the cluster as it appears in Atlas. Once the cluster is created, its
    name cannot be changed.
    HEREDOC
    type        = map(object({
        value = string
    }))
    default = {
        prd = {
            value = "PROD"
        }
        stg = {
            value = "STG"
        }
        dev = {
            value = "DEV"
        }
    }
}

variable "old_cluster_bool" {
    description = <<HEREDOC
    Optional - Specifies if the cluster is an old cluster.
    HEREDOC
    type        = bool
    default     = true
}

variable "cluster_type" {
    description = <<HEREDOC
    Optional - Specifies the type of the cluster that you want to modify. You cannot convert
    a sharded cluster deployment to a replica set deployment. Accepted values include:
    REPLICASET for Replica set, SHARDED for Sharded cluster, and GEOSHARDED for Global Cluster
    HEREDOC
    default     = "REPLICASET"
}

variable "mongo_db_major_version" {
    description = <<HEREDOC
    Optional - Major version of MongoDB that the cluster runs, in "major.minor"
    format (for example, "7.0").
    HEREDOC
    type        = map(object({
        value = string
    }))
    default = {
        prd = {
            value = "7.0"
        }
        stg = {
            value = "7.0"
        }
        dev = {
            value = "7.0"
        }
    }
}

variable "version_release_system" {
    description = <<HEREDOC
    Optional - Release cadence that Atlas uses for this cluster. This parameter defaults to LTS. 
    If you set this field to CONTINUOUS, you must omit the mongo_db_major_version field. Atlas accepts:
    CONTINUOUS - Atlas deploys the latest version of MongoDB available for the cluster tier.
    LTS - Atlas deploys the latest Long Term Support (LTS) version of MongoDB available for the cluster tier.
    HEREDOC
    default     = "LTS"
}

variable "num_shards" {
    description = <<HEREDOC
    Optional - Number of shards, minimum 1.
    The default is 1.    
    HEREDOC
    type        = map(object({
        value = number
    }))
    default = {
        prd = {
            value = 1
        }
        stg = {
            value = 1
        }
        dev = {
            value = 1
        }
    }
}

variable "instance_size" {
    description = <<HEREDOC
    Optional - Atlas provides different instance sizes, each with a default storage capacity and RAM size.
    The instance size you select is used for all the data-bearing servers in your cluster.
    HEREDOC
    default = "M30"
}

variable "compute_enabled" {
    description = <<HEREDOC
    Optional - Specifies whether cluster tier auto-scaling is enabled. The default is true.
    IMPORTANT: If compute_enabled is true, then Atlas will 
    automatically scale up to the maximum provided and down to the minimum, if provided.
    HEREDOC
    type        = bool
    default     = true
}

variable "compute_scale_down_enabled" {
    description = <<HEREDOC
    Optional - Set to true to enable the cluster tier to scale down. This option is only available
    if compute_enabled is true. The default is true.
    HEREDOC
    type        = bool
    default     = true
}

variable "compute_min_instance_size" {
    description = <<HEREDOC
    Optional - Minimum instance size to which your cluster can automatically scale (e.g., M10).
    The default is "M30".
    HEREDOC
    default     = "M30"
}

variable "compute_max_instance_size" {
    description = <<HEREDOC
    Optional - Maximum instance size to which your cluster can automatically scale (e.g., M40).
    The default is "M80".
    HEREDOC
    default     = "M80"
}

variable "disk_size_gb" {
    description = <<HEREDOC
    Optional - Capacity, in gigabytes, of the host’s root volume. Increase this
    number to add capacity, up to a maximum possible value of 4096 (i.e., 4 TB). This value must
    be a positive integer. If you specify diskSizeGB with a lower disk size, Atlas defaults to
    the minimum disk size value. Note: The maximum value for disk storage cannot exceed 50 times
    the maximum RAM for the selected cluster. If you require additional storage space beyond this
    limitation, consider upgrading your cluster to a higher tier.
    HEREDOC
    type        = map(object({
        value = number
    }))
    default = {
        prd = {
            value = 10
        }
        stg = {
            value = 10
        }
        dev = {
            value = 10
        }
    }
}

variable "backup_enabled" {
    description = <<HEREDOC
    Optional - Flag indicating if the cluster uses Cloud Backup for backups. If true, the cluster
    uses Cloud Backup for backups. The default is true.
    HEREDOC
    type        = bool
    default     = true
}

variable "pit_enabled" {
    description = <<HEREDOC
    Optional - Flag that indicates if the cluster uses Continuous Cloud Backup. If set to true,
    backup_enabled must also be set to true. The default is true.
    HEREDOC
    type        = bool
    default     = true
}

variable "disk_gb_enabled" {
    description = <<HEREDOC
    Optional - Specifies whether disk auto-scaling is enabled. The default is true.
    HEREDOC
    type        = bool
    default     = true
}

variable "fail_index_key_too_long" {
    description = <<HEREDOC
    Optional - When true, documents can only be updated or inserted if, for all indexed fields
    on the target collection, the corresponding index entries do not exceed 1024 bytes.
    When false, mongod writes documents that exceed the limit but does not index them.
    HEREDOC
    type        = bool
    default     = false
}

variable "javascript_enabled" {
    description = <<HEREDOC
    Optional - When true, the cluster allows execution of operations that perform server-side
    executions of JavaScript. When false, the cluster disables execution of those operations.
    HEREDOC
    type        = bool
    default     = true
}

variable "minimum_enabled_tls_protocol" {
    description = <<HEREDOC
    Optional - Sets the minimum Transport Layer Security (TLS) version the cluster accepts for
    incoming connections. Valid values are: TLS1_0, TLS1_1, TLS1_2. The default is "TLS1_2".
    HEREDOC
    default     = "TLS1_2"
}

variable "no_table_scan" {
    description = <<HEREDOC
    Optional - When true, the cluster disables the execution of any query that requires a collection
    scan to return results. When false, the cluster allows the execution of those operations.
    HEREDOC
    type        = bool
    default     = false
}

variable "zone_name" {
    description = <<HEREDOC
    Optional - Name for the zone in a Global Cluster.
    HEREDOC
    default     = null
}

variable "regions_config" {
    description = <<HEREDOC
    Required - Physical location of the region. Each regionsConfig document describes
    the region’s priority in elections and the number and type of MongoDB nodes Atlas
    deploys to the region. You can set the following parameters:

    - region_name - Optional - Physical location of your MongoDB cluster. The region
    you choose can affect network latency for clients accessing your databases.

    - electable_nodes - Optional - Number of electable nodes for Atlas to deploy to the
    region. Electable nodes can become the primary and can facilitate local reads.
    The total number of electableNodes across all replication spec regions must total 3,
    5, or 7. Specify 0 if you do not want any electable nodes in the region. You cannot
    create electable nodes in a region if priority is 0.

    - priority - Optional - Election priority of the region. For regions with only read-only
    nodes, set this value to 0. For regions where electable_nodes is at least 1, each region
    must have a priority of exactly one (1) less than the previous region. The first region 
    must have a priority of 7. The lowest possible priority is 1. The priority 7 region 
    identifies the Preferred Region of the cluster. Atlas places the primary node in the 
    Preferred Region. Priorities 1 through 7 are exclusive - no more than one region per 
    cluster can be assigned a given priority. Example: If you have three regions, their 
    priorities would be 7, 6, and 5 respectively. If you added two more regions for supporting
    electable nodes, the priorities of those regions would be 4 and 3 respectively.

    - read_only_nodes - Optional - Number of read-only nodes for Atlas to deploy to the region.
    Read-only nodes can never become the primary, but can facilitate local-reads. Specify 0 if
    you do not want any read-only nodes in the region.

    - analytics_nodes - Optional - The number of analytics nodes for Atlas to deploy to the region.
    Analytics nodes are useful for handling analytic data such as reporting queries from BI 
    Connector for Atlas. Analytics nodes are read-only, and can never become the primary. If you do
    not specify this option, no analytics nodes are deployed to the region.
    HEREDOC
    type        = map(list(any))
}

variable "resource_tags" {
    description = <<HEREDOC
    Optional - Key-value pairs that tag and categorize the cluster. Each key and value has a
    maximum length of 255 characters. You cannot set the key Infrastructure Tool, it is used
    for internal purposes to track aggregate usage.
    HEREDOC
    type        = map(string)
    default = {}
}

variable "bi_connector_enabled" {
    description = <<HEREDOC
    Optional - Specifies whether or not BI Connector for Atlas is enabled on the cluster.
    Set to true to enable BI Connector for Atlas. Set to false to disable BI Connector for Atlas.
    HEREDOC
    type        = bool
    default     = false
}
variable "bi_connector_read_preference" {
    description = <<HEREDOC
    Optional - Specifies the read preference to be used by BI Connector for Atlas on the cluster.
    Each BI Connector for Atlas read preference contains a distinct combination of readPreference
    and readPreferenceTags options. For details on BI Connector for Atlas read preferences, refer
    to the BI Connector Read Preferences Table.
    Set to "primary" to have BI Connector for Atlas read from the primary. Set to "secondary" to 
    have BI Connector for Atlas read from a secondary member. Default if there are no analytics 
    nodes in the cluster. Set to "analytics" to have BI Connector for Atlas read from an analytics
    node. Default if the cluster contains analytics nodes.
    HEREDOC
    type        = string
    default     = "secondary"
}

# ------------------------------------------------------------------------------
# MONGODB BACKUP SCHEDULE
# ------------------------------------------------------------------------------
variable "policy_item_hourly" {
    description = <<HEREDOC
    Optional - Specifies the backup policy for the cluster. Each backup policy contains a
    distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
    For details on backup policies, refer to the Backup Policies Table.
    HEREDOC
    type        = map(object({
        frequency_interval = optional(number)
        retention_unit     = optional(string)
        retention_value    = optional(number)
    }))
    default = {
        prod = {
            frequency_interval          = 1
            retention_unit              = "days"
            retention_value             = 1
        }
        nonprod = {
            frequency_interval          = 1
            retention_unit              = "days"
            retention_value             = 1
        }
    }
}

variable "policy_item_daily" {
    description = <<HEREDOC
    Optional - Specifies the backup policy for the cluster. Each backup policy contains a
    distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
    For details on backup policies, refer to the Backup Policies Table.
    HEREDOC
    type        = map(object({
        frequency_interval = optional(number)
        retention_unit     = optional(string)
        retention_value    = optional(number)
    }))
    default = {
        prod = {
            frequency_interval          = 1
            retention_unit              = "days"
            retention_value             = 7
        }
        nonprod = {
            frequency_interval          = 1
            retention_unit              = "days"
            retention_value             = 5
        }
    }
}

variable "policy_item_weekly" {
    description = <<HEREDOC
    Optional - Specifies the backup policy for the cluster. Each backup policy contains a
    distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
    For details on backup policies, refer to the Backup Policies Table.
    HEREDOC
    type        = map(object({
        frequency_interval = optional(number)
        retention_unit     = optional(string)
        retention_value    = optional(number)
    }))
    default = {
        prod = {
            frequency_interval          = 6
            retention_unit              = "weeks"
            retention_value             = 4
        }
        nonprod = {
            frequency_interval          = 1
            retention_unit              = "weeks"
            retention_value             = 1
        }
    }
}

variable "policy_item_monthly" {
    description = <<HEREDOC
    Optional - Specifies the backup policy for the cluster. Each backup policy contains a
    distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
    For details on backup policies, refer to the Backup Policies Table.
    HEREDOC
    type        = map(object({
        frequency_interval = optional(number)
        retention_unit     = optional(string)
        retention_value    = optional(number)
    }))
    default = {
        prod = {
            frequency_interval          = 40
            retention_unit              = "months"
            retention_value             = 3
        }
        nonprod = {
            frequency_interval          = 1
            retention_unit              = "months"
            retention_value             = 1
        }
    }
}

variable "policy_item_yearly" {
    description = <<HEREDOC
    Optional - Specifies the backup policy for the cluster. Each backup policy contains a
    distinct combination of frequency, pointInTimeWindowHours, and retentionDays options.
    For details on backup policies, refer to the Backup Policies Table.
    HEREDOC
    type        = map(object({
        frequency_interval = optional(number)
        retention_unit     = optional(string)
        retention_value    = optional(number)
    }))
    default = {
        prod = {
            frequency_interval          = 1
            retention_unit              = "years"
            retention_value             = 5
        }
        nonprod = {
            frequency_interval          = 1
            retention_unit              = "years"
            retention_value             = 1
        }
    }
}
variable "reference_hour_of_day" {
    description = <<HEREDOC
    UTC Hour of day between 0 and 23, inclusive, 
    representing which hour of the day that Atlas takes snapshots for backup policy items.
    HEREDOC
    type        = number
    default     = 1
}

variable "reference_minute_of_hour" {
    description = <<HEREDOC
    UTC Minutes after reference_hour_of_day that Atlas takes snapshots for backup policy items. 
    Must be between 0 and 59, inclusive.
    HEREDOC
    type        = number
    default     = 0
}

variable "restore_window_days" {
    description = <<HEREDOC
    Number of days back in time you can restore to with point-in-time accuracy. 
    Must be a positive, non-zero integer
    HEREDOC
    type        = number
    default     = 1
}

## cluster.hcl (consumed via Terragrunt):

include {
  path                                = find_in_parent_folders()
}

locals {
  component_name = "modules/cluster"
  component_version = "v1.5.0"
}

inputs                                = {
  no_table_scan                       = false
  cluster_name                        = "rec"
  old_cluster_bool                    = false
  cluster_type                        = "REPLICASET"
  disk_size_gb                        = { prd = { value = 107 }, stg = { value = 30 }, dev = { value = 10 } }
  mongo_db_major_version              = { prd = { value = "7.0" }, stg = { value = "7.0" }, dev = { value = "7.0" } }

  regions_config                      = {
    prd                               = [
      {
        region_name                   = "SOUTH_AMERICA_EAST_1"
        provider_name                 = "GCP"
        region_priority               = 7

        electable_specs               = {
          instance_size               = "M10"
          node_count                  = 3
        }

        auto_scaling                  = {
          compute_min_instance_size   = "M10"
          compute_max_instance_size   = "M30"
          disk_gb_enabled             = true
          compute_enabled             = true
          compute_scale_down_enabled  = true
        }
      }
    ]
    stg                               = [
      {
        region_name                   = "CENTRAL_US"
        provider_name                 = "GCP"
        region_priority               = 7

        electable_specs               = {
          instance_size               = "M10"
          node_count                  = 3
        }

        auto_scaling                  = {
          compute_min_instance_size   = "M10"
          compute_max_instance_size   = "M30"
          disk_gb_enabled             = true
          compute_enabled             = true
          compute_scale_down_enabled  = true
        }
      }
    ]
    dev                               = [
      {
        region_name                   = "CENTRAL_US"
        provider_name                 = "GCP"
        region_priority               = 7

        electable_specs               = {
          instance_size               = "M10"
          node_count                  = 3
        }

        auto_scaling                  = {
          compute_min_instance_size   = null
          compute_max_instance_size   = null
          disk_gb_enabled             = true
          compute_enabled             = false
          compute_scale_down_enabled  = false
        }
      }
    ]
  }

  reference_hour_of_day               = 1
  reference_minute_of_hour            = 20
  restore_window_days                 = 1

  policy_item_hourly                  = {
    prod                              = {
        frequency_interval            = 1
        retention_unit                = "days"
        retention_value               = 1
    }
  }

  policy_item_daily                   = {
    prod                              = {
        frequency_interval            = 1
        retention_unit                = "days"
        retention_value               = 7
    }
    nonprod                           = {
        frequency_interval            = 1
        retention_unit                = "days"
        retention_value               = 5
    }
  }

  policy_item_weekly                  = {
    prod                              = {
        frequency_interval            = 1
        retention_unit                = "weeks"
        retention_value               = 4
    }
  }

  policy_item_monthly                 = {
    prod                              = {
        frequency_interval            = 40
        retention_unit                = "months"
        retention_value               = 12
    }
  }

  policy_item_yearly                  = {
    prod                              = {
        frequency_interval            = 1
        retention_unit                = "years"
        retention_value               = 5
    }
  }

  resource_tags                       = {
    service-owner                     = "Data-Driven"
    service-name                      = "reconciliation"
  }
} 

### Steps To Reproduce

```shell
terragrunt init
terragrunt plan
terragrunt apply
```

### Logs

```txt
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # mongodbatlas_cloud_backup_schedule.prod[0] will be updated in-place
  ~ resource "mongodbatlas_cloud_backup_schedule" "prod" {
        id                                       = "Y2x1c3Rlcl9uYW1l:bWdvLXByZC1yZWM=-cHJvamVjdF9pZA==:NWY5YWUwOWY2MjNkMmUyOTQzZWU4ZGQ3"
        # (11 unchanged attributes hidden)

      - policy_item_yearly {
          - frequency_interval = 1 -> null
          - frequency_type     = "yearly" -> null
          - id                 = "66619ba91aef75af752e60a5" -> null
          - retention_unit     = "years" -> null
          - retention_value    = 5 -> null
        }

        # (4 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```

### Code of Conduct

- [X] I agree to follow this project's Code of Conduct
github-actions[bot] commented 1 week ago

Thanks for opening this issue! Please make sure you've followed our guidelines when opening the issue.

The ticket CLOUDP-256809 was created for internal tracking.

github-actions[bot] commented 1 week ago

This issue has gone 7 days without any activity and meets the project’s definition of "stale". This will be auto-closed if there is no new activity over the next 7 days. If the issue is still relevant and active, you can simply comment with a "bump" to keep it open, or add the label "not_stale". Thanks for keeping our repository healthy!

lantoli commented 6 days ago

thanks @Kikivsantos for opening the issue.

I see that you also opened a support ticket. I'm going to close this issue so we can centralise the resolution of the issue, we'll keep you posted there.

thanks again