Closed — cah6 closed this 1 month ago
Thanks for opening this issue! Please make sure you've followed our guidelines when opening the issue. In short, to help us reproduce the issue we need:
The ticket CLOUDP-271592 was created for internal tracking.
Hmm. So we re-applied the same reported drift, which was:
```
  # mongodbatlas_advanced_cluster.cluster0 will be updated in-place
  ~ resource "mongodbatlas_advanced_cluster" "cluster0" {
        id   = "elided"
        name = "Cluster0"
        # (18 unchanged attributes hidden)

      ~ replication_specs {
            id         = "elided"
          ~ num_shards = 2 -> 1
            # (4 unchanged attributes hidden)
            # (1 unchanged block hidden)
        }
      + replication_specs {
          + container_id = (known after apply)
          + num_shards   = 1
          + zone_name    = "Zone 1"

          + region_configs {
              + priority      = 7
              + provider_name = "GCP"
              + region_name   = "CENTRAL_US"

              + analytics_auto_scaling (known after apply)
              + analytics_specs (known after apply)

              + auto_scaling {
                  + compute_enabled            = false
                  + compute_scale_down_enabled = false
                  + disk_gb_enabled            = true
                }
              + electable_specs {
                  + instance_size = "M30"
                  + node_count    = 3
                }
              + read_only_specs (known after apply)
            }
        }
        # (2 unchanged blocks hidden)
    }
```
and this time it seems like there's no terraform plan drift. Maybe the cluster API got fixed to properly apply the change?

I'll close this but will re-open if it somehow drifts back to thinking tf state and cluster state are different.
Thanks @cah6 for opening the issue. I have tried to reproduce this without success. In any case, please reopen it or open a new issue in case this happens again. Thanks again!
Is there an existing issue for this?
Provider Version
1.18.1
Terraform Version
v1.9.5
Terraform Edition
Terraform Open Source (OSS)
Current Behavior
I followed the migration guide to upgrade to 1.18.1 (https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema#migrate-advanced_cluster-type-sharded) on a sharded cluster. Namely, I removed num_shards = 2 and instead repeated the replication_specs blocks. This signals to terraform that the cluster needs to be updated (expected), and when the change is applied, the Atlas UI reports in the activity tab "No changes to the cluster were detected in the update submitted through the public API." (expected). However, even after the apply, terraform still thinks there's drift; it seems that the API is returning info in the old format (a replicationSpecList with numShards = 2), so there's no way to get to a clean state.

Terraform configuration to reproduce the issue
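Concretely, the new-schema shape of the change looks roughly like this (values taken from the plan diff earlier in the thread; project_id and variable names are illustrative, not from our actual config):

```hcl
# New sharding schema (provider 1.18+): no num_shards; instead one
# replication_specs block is declared per shard.
resource "mongodbatlas_advanced_cluster" "cluster0" {
  project_id   = var.project_id # illustrative
  name         = "Cluster0"
  cluster_type = "SHARDED"

  # First shard
  replication_specs {
    region_configs {
      priority      = 7
      provider_name = "GCP"
      region_name   = "CENTRAL_US"
      electable_specs {
        instance_size = "M30"
        node_count    = 3
      }
    }
  }

  # Second shard: the replication_specs block is repeated, replacing
  # the old num_shards = 2.
  replication_specs {
    region_configs {
      priority      = 7
      provider_name = "GCP"
      region_name   = "CENTRAL_US"
      electable_specs {
        instance_size = "M30"
        node_count    = 3
      }
    }
  }
}
```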
Steps To Reproduce
The configuration above is what our cluster is, but you may need to first create the cluster with the old syntax, then change to the new syntax. That is, create the cluster with:
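(I didn't paste the original old-syntax config here; roughly, it looked like the following sketch, with values matching the plan diff above and an illustrative project_id:)

```hcl
# Old sharding schema: a single replication_specs block with
# num_shards, which is what the API kept reporting back.
resource "mongodbatlas_advanced_cluster" "cluster0" {
  project_id   = var.project_id # illustrative
  name         = "Cluster0"
  cluster_type = "SHARDED"

  replication_specs {
    num_shards = 2
    region_configs {
      priority      = 7
      provider_name = "GCP"
      region_name   = "CENTRAL_US"
      electable_specs {
        instance_size = "M30"
        node_count    = 3
      }
    }
  }
}
```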
then change the config to what is in the "Terraform configuration to reproduce the issue" section.

Although -- I'm not sure if this will reliably reproduce the issue, since we made the same change on another cluster and it doesn't produce the same drift. Besides cluster tier, the primary difference in the other cluster is that it's a GEOSHARDED cluster, which may be related.

Logs
No response
Code of Conduct