chkp-oferb opened 2 years ago
I'm currently facing the same issue. It says "forces replacement" for the kubernetes_cluster_id even though it didn't change:
~ kubernetes_cluster_id = "/subscriptions/33aaa451-2be4-4e1d-b677-29de9102e582/resourceGroups/kubernetes-dev/providers/Microsoft.ContainerService/managedClusters/REDACTED" -> "/subscriptions/33aaa451-2be4-4e1d-b677-29de9102e582/resourcegroups/kubernetes-dev/providers/Microsoft.ContainerService/managedClusters/REDACTED" # forces replacement
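The only difference in the diff above is the casing of the `resourceGroups` segment. One workaround discussed later in this thread is to normalize the casing with `replace()` so the computed ID matches what the provider stores in state. A minimal sketch (resource names here are illustrative, not from any specific configuration):

```hcl
resource "azurerm_kubernetes_cluster_node_pool" "example" {
  name = "examplepool"

  # Lowercase the "resourceGroups" segment so the value matches the
  # casing the AzureRM provider records in state, avoiding a spurious
  # "forces replacement" diff. (Workaround sketch, not an official fix.)
  kubernetes_cluster_id = replace(azurerm_kubernetes_cluster.example.id, "resourceGroups", "resourcegroups")

  vm_size    = "Standard_D2_v2"
  node_count = 1
}
```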
+1 on the same. No changes were made, yet it still says "forces replacement" every time I run terraform plan or terraform apply.
Found a solution, though I'm not sure whether it's kept that way on purpose: adding vnet_subnet_id resolves this, i.e. the node pool is no longer replaced.
@oferbd9 Thank you for opening this issue. Was @ssrahul96's solution able to resolve your issue?
Thank you very much @ssrahul96, @rcskosir, it did work in my case. Reading the documentation, could vnet_subnet_id perhaps be made a mandatory parameter?
In my case:

```hcl
resource "azurerm_kubernetes_cluster_node_pool" "extra-node-pool-1" {
  name                  = "extrapool1"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.aks-cluster.id
  # kubernetes_cluster_id = replace(azurerm_kubernetes_cluster.aks-cluster.id, "resourceGroups", "resourcegroups")
  vm_size               = "Standard_D2_v2"
  node_count            = 2
  max_pods              = 250
  depends_on            = [azurerm_kubernetes_cluster.aks-cluster]
  vnet_subnet_id        = azurerm_kubernetes_cluster.aks-cluster.default_node_pool[0].vnet_subnet_id

  # check out https://stackoverflow.com/questions/67825862/terraform-forces-aks-node-pool-replacement-without-any-changes
  # lifecycle {
  #   ignore_changes = [
  #     kubernetes_cluster_id
  #   ]
  # }
}
```
Terraform Version
1.2.2
AzureRM Provider Version
3.8.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster_node_pool
Terraform Configuration Files
Debug Output/Panic Output
Expected Behaviour
With no changes to the Terraform manifest or the app node pool (user mode), the resource should not be affected by a replace or delete.
Actual Behaviour
After a fresh deployment of the manifest with 6 clusters in 6 regions, running terraform plan again, without any changes, wants to delete/replace all the app node pools.
Steps to Reproduce
1. Create an AKS cluster
2. Add an app node pool (user mode, 3 nodes)
3. Add Application Gateway / AKS ingress and configure AGIC
4. Apply
5. Run terraform plan again
Important Factoids
No response
References
no