mongodb / terraform-provider-mongodbatlas

Terraform MongoDB Atlas Provider: Deploy, update, and manage MongoDB Atlas infrastructure as code through HashiCorp Terraform
https://registry.terraform.io/providers/mongodb/mongodbatlas
Mozilla Public License 2.0

[Bug]: <Root object was present, but now absent.> #2345

Closed: priyanshur-curefit closed this issue 3 days ago

priyanshur-curefit commented 2 weeks ago

Is there an existing issue for this?

Provider Version

v1.17.0

Terraform Version

v1.8.4

Terraform Edition

Terraform Open Source (OSS)

Current Behavior

I am getting the following error while running mongodbatlas_cloud_backup_snapshot_restore_job:

```
│ 
│ When applying changes to
│ mongodbatlas_cloud_backup_snapshot_restore_job.restore_job["prod-digital-hyd"], provider
│ "provider[\"registry.terraform.io/mongodb/mongodbatlas\"]" produced an unexpected new value: Root
│ object was present, but now absent.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
```

### Terraform configuration to reproduce the issue

```hcl
resource "mongodbatlas_cloud_backup_snapshot_restore_job" "restore_job" {
  for_each = var.clusters

  project_id          = each.value.project_id
  cluster_name        = each.key
  snapshot_id         = data.external.fetch_snapshot_id[each.key].result["snapshot_id"]
  depends_on = [
    mongodbatlas_cluster.hyd_mongo_cluster
  ]
  delivery_type_config {
    automated = true
    target_cluster_name = each.key
    target_project_id     = each.value.project_id
  }
}

Steps To Reproduce

  1. Create a cluster using mongodbatlas_cluster
  2. Try to run mongodbatlas_cloud_backup_snapshot_restore_job using the snapshot_id from another cluster (see the sketch below)
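
Roughly, step 2 corresponds to a restore job like the following sketch. All names and IDs here are placeholders, not values from the real configuration; if I read the provider docs correctly, `project_id` and `cluster_name` identify the cluster the snapshot was taken from, while the `target_*` attributes in `delivery_type_config` identify the cluster being restored into.

```hcl
# Placeholder sketch of a cross-cluster restore; every value is illustrative.
resource "mongodbatlas_cloud_backup_snapshot_restore_job" "cross_cluster_restore" {
  # The cluster that owns the snapshot
  project_id   = "<SOURCE_PROJECT_ID>"
  cluster_name = "<SOURCE_CLUSTER_NAME>"
  snapshot_id  = "<SNAPSHOT_ID>"

  delivery_type_config {
    automated = true
    # The cluster the snapshot is restored into
    target_cluster_name = "<TARGET_CLUSTER_NAME>"
    target_project_id   = "<TARGET_PROJECT_ID>"
  }
}
```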

Logs

No response

Code of Conduct

github-actions[bot] commented 2 weeks ago

Thanks for opening this issue! Please make sure you've followed our guidelines when opening the issue. In short, to help us reproduce the issue we need:

The ticket CLOUDP-254199 was created for internal tracking.

github-actions[bot] commented 1 week ago

This issue has gone 7 days without any activity and meets the project’s definition of "stale". This will be auto-closed if there is no new activity over the next 7 days. If the issue is still relevant and active, you can simply comment with a "bump" to keep it open, or add the label "not_stale". Thanks for keeping our repository healthy!

maastha commented 1 week ago

Hi @priyanshur-curefit, thanks a lot for creating this issue! :)

Unfortunately, I was not able to reproduce it. I used the configuration below and was able to successfully deploy everything.

resource "mongodbatlas_cluster" "my_cluster" {
  project_id = mongodbatlas_project.atlas-project.id
  name       = "test"

  provider_name               = "AWS"
  provider_region_name        = "US_WEST_2"
  provider_instance_size_name = "M10"
  cloud_backup                = true // enable cloud provider snapshots
}

resource "mongodbatlas_cloud_backup_snapshot" "test" {
  project_id        = mongodbatlas_cluster.my_cluster.project_id
  cluster_name      = mongodbatlas_cluster.my_cluster.name
  description       = "tmp"
  retention_in_days = 1
}

resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" {
  project_id   = mongodbatlas_cloud_backup_snapshot.test.project_id
  cluster_name = mongodbatlas_cloud_backup_snapshot.test.cluster_name
  snapshot_id  = mongodbatlas_cloud_backup_snapshot.test.id

  depends_on = [
    mongodbatlas_cluster.my_cluster
  ]
  delivery_type_config {
    automated           = true
    target_cluster_name = mongodbatlas_cluster.my_cluster.name
    target_project_id   = mongodbatlas_project.atlas-project.id
  }
}
```

If you are still running into this issue, could you please share your complete configuration and make sure it follows the one-click reproducible issues principle?

Thank you!

priyanshur-curefit commented 1 week ago
resource "mongodbatlas_cluster" "hyd_mongo_cluster" {
  for_each = var.clusters

  project_id              = each.value.project_id
  name                    = each.key
  provider_name           = each.value.provider_name  
  cluster_type            = each.value.cluster_type
  cloud_backup            = each.value.cloud_backup

  provider_instance_size_name = each.value.provider_instance_size_name

replication_specs {
  num_shards = 1

  dynamic "regions_config" {
    for_each = each.value.replication_specs
    content {
      region_name     = regions_config.key
      electable_nodes = regions_config.value.electable_nodes
      analytics_nodes = regions_config.value.analytics_nodes
      priority        = regions_config.value.priority
    }
  }
}

  auto_scaling_disk_gb_enabled = each.value.auto_scaling_disk_gb_enabled

  disk_size_gb = each.value.disk_size_gb

  mongo_db_major_version = each.value.mongo_db_major_version

  dynamic "tags" {
    for_each = each.value.tags
    content {
      key   = tags.key
      value = tags.value
    }
  }
}

resource "mongodbatlas_cloud_backup_snapshot_restore_job" "restore_job" {
  for_each = var.clusters

  project_id          = each.value.project_id
  cluster_name        = each.key
  snapshot_id         = data.external.fetch_snapshot_id[each.key].result["snapshot_id"]
  depends_on = [
    mongodbatlas_cluster.hyd_mongo_cluster
  ]
  delivery_type_config {
    automated = true
    target_cluster_name = each.key
    target_project_id     = each.value.project_id
  }
}

data "external" "fetch_snapshot_id" {
  for_each = var.clusters

  program = ["bash", "${path.module}/mongo-snapshots-fetcher.sh", replace(each.key, "-hyd", "")]  
}

Here you can see I am trying to restore an already available snapshot from another cluster into a new cluster. I am not using mongodbatlas_cloud_backup_snapshot to create a snapshot; instead, I am using a custom script, mongo-snapshots-fetcher.sh, which returns the snapshot ID, and I am using that value.

maastha commented 1 week ago

I see. In that case, are you sure snapshot_id has a valid value from your script? Could you please share your state file for the concerned resources, along with the logs? This could be an issue on the backend, but it's hard to say just from looking at the config.
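
One quick way to check, just as a sketch (the output name here is illustrative, not part of your existing config), is to surface what the external data source returns:

```hcl
# Illustrative debugging aid: expose the snapshot IDs returned by the external
# script so an empty or malformed value is visible directly in plan/apply output.
output "fetched_snapshot_ids" {
  value = {
    for cluster, ds in data.external.fetch_snapshot_id :
    cluster => ds.result["snapshot_id"]
  }
}
```

If any of those values come back empty or unexpected, the problem is more likely in the script output than in the provider.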