Closed — this issue was closed by weisdd 2 years ago
Hey @weisdd,
This was previously raised in https://github.com/hashicorp/terraform-provider-azurerm/issues/18237, and as explained in the response to your PR #18238, this is by design.
Thanks for taking the time to raise this issue, but since this behaviour is by design, I am going to mark this as a duplicate and close it.
@stephybun alright, thanks for pointing that out!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Is there an existing issue for this?
Community Note
I'll open a PR myself soon.
Terraform Version
v1.2.8
AzureRM Provider Version
3.21.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster_node_pool
Terraform Configuration Files
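No configuration was attached to the issue. A minimal sketch that should reproduce the reported diff is below; the cluster reference, pool name, node count, and VM size are placeholders, not taken from the reporter's setup:

```hcl
# Hypothetical minimal repro: a spot node pool that relies on Azure's
# server-side defaults for eviction_policy and node_taints.
resource "azurerm_kubernetes_cluster_node_pool" "spot" {
  name                  = "spot"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  priority              = "Spot"
  spot_max_price        = -1
  node_count            = 1

  # eviction_policy and node_taints are intentionally omitted: Azure
  # defaults them to "Delete" and
  # ["kubernetes.azure.com/scalesetpriority=spot:NoSchedule"], which
  # the provider then reports as a diff forcing pool replacement.
}
```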
Debug Output/Panic Output
Expected Behaviour
Once the infrastructure is deployed, subsequent calls to terraform apply / terraform plan should produce no diffs: "No changes. Your infrastructure matches the configuration."
Actual Behaviour
terraform suggests replacing the spot node pool due to changes in eviction_policy and node_taints:
Steps to Reproduce
terraform apply
terraform plan
Important Factoids
No response
References
As you can see from https://docs.microsoft.com/en-gb/azure/aks/spot-node-pool, both eviction_policy and node_taints are optional; if not explicitly provided, Azure sets them to Delete and kubernetes.azure.com/scalesetpriority=spot:NoSchedule respectively. Thus, I'd expect the azurerm provider to treat those values as defaults.
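Since the provider treats these attributes as user-supplied rather than server-defaulted, one way to avoid the perpetual diff (a sketch under the assumptions above, not an official recommendation) is to state both values explicitly so the configuration matches what Azure returns:

```hcl
# Workaround sketch: pin the Azure-side defaults in the configuration.
# All other arguments here are placeholders for the real pool definition.
resource "azurerm_kubernetes_cluster_node_pool" "spot" {
  name                  = "spot"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  priority              = "Spot"
  spot_max_price        = -1
  node_count            = 1

  # Match the values Azure applies when these are omitted, so
  # terraform plan sees no difference and no replacement is proposed.
  eviction_policy = "Delete"
  node_taints     = ["kubernetes.azure.com/scalesetpriority=spot:NoSchedule"]
}
```

With both arguments pinned, the state read back from Azure matches the configuration and subsequent plans report no changes.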