Azure / terraform-azurerm-aks

Terraform Module for deploying an AKS cluster

Adding "kubelet_config" to an existing "azurerm_kubernetes_cluster_node_pool" forces the node pool to be replaced #563


yuslee80 commented 3 weeks ago


Description

Adding a "kubelet_config" block to an existing "azurerm_kubernetes_cluster_node_pool" resource causes Terraform to plan a full replacement of the node pool.

resource "azurerm_kubernetes_cluster_node_pool" "usernodepool" {
  for_each = var.usernodepoo_vm

  name                  = each.value.user_agents_name
  kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
  vm_size               = each.value.user_agents_size
  os_disk_size_gb       = each.value.user_agents_os_disk_size
  node_count            = each.value.user_agents_count
  vnet_subnet_id        = data.azurerm_subnet.subnet.id
  zones                 = [1, 2, 3]
  mode                  = "User"
  kubelet_disk_type     = "OS"
  os_sku                = "Ubuntu"
  os_disk_type          = "Managed"
  ultra_ssd_enabled     = "false"
  max_pods              = each.value.max_pods
  orchestrator_version  = each.value.orchestrator_version
  node_labels           = each.value.node_labels
  kubelet_config {
      container_log_max_line = each.value.container_log_max_line
      container_log_max_size_mb = each.value.container_log_max_size_mb
        }
  upgrade_settings {
      max_surge = each.value.max_surge
    }
}

If I add the configuration above and run terraform plan, Terraform plans to replace the node pool, as shown below:

  # azurerm_kubernetes_cluster_node_pool.usernodepool["vm3"] must be replaced
-/+ resource "azurerm_kubernetes_cluster_node_pool" "usernodepool" {
      - custom_ca_trust_enabled       = false -> null
      - enable_auto_scaling           = false -> null
      - enable_host_encryption        = false -> null
      - enable_node_public_ip         = false -> null
      - fips_enabled                  = false -> null
      ~ id                            = "/subscriptions/ca8f37d8-6d59-4082-b174-33b4c47334a4/resourceGroups/rg-d-search-aks02-aks/providers/Microsoft.ContainerService/managedClusters/aks-d-sch01/agentPools/upool03" -> (known after apply)
      - max_count                     = 0 -> null
      - min_count                     = 0 -> null
        name                          = "upool03"
      - node_taints                   = [] -> null
      - tags                          = {} -> null
        # (25 unchanged attributes hidden)

      + kubelet_config { # forces replacement
          + container_log_max_line    = 10 # forces replacement
          + container_log_max_size_mb = 1 # forces replacement
          + cpu_cfs_quota_enabled     = false # forces replacement
        }

      ~ upgrade_settings {
          - drain_timeout_in_minutes      = 0 -> null
          - node_soak_duration_in_minutes = 0 -> null
            # (1 unchanged attribute hidden)
        }
    }

However, the same "kubelet_config" block added to "default_node_pool" inside "azurerm_kubernetes_cluster" plans as an in-place "change".
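
For comparison, a minimal sketch of the same block inside the cluster resource. The resource names, VM size, and node count here are illustrative, not taken from this report, and "azurerm_resource_group.rg" is assumed to exist:

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-example" # illustrative
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "aks-example"

  default_node_pool {
    name       = "system"
    vm_size    = "Standard_D2s_v3"
    node_count = 2

    # On azurerm v3.107.0, adding this block here plans as an
    # in-place update ("change"), unlike the same block on
    # azurerm_kubernetes_cluster_node_pool.
    kubelet_config {
      container_log_max_line    = 10
      container_log_max_size_mb = 1
    }
  }

  identity {
    type = "SystemAssigned"
  }
}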

New or Affected Resource(s)/Data Source(s)

azurerm_kubernetes_cluster_node_pool (azurerm provider v3.107.0)

Potential Terraform Configuration

Adding "kubelet_config" to an existing node pool should plan as a "change" (in-place update) instead of a "force replace".
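
Until this is addressed, one possible stopgap, which is my own suggestion and not something confirmed in this issue, is to stop Terraform from planning the replacement by ignoring the block. The trade-off is that Terraform will then no longer reconcile those kubelet settings on existing pools:

resource "azurerm_kubernetes_cluster_node_pool" "usernodepool" {
  for_each = var.usernodepool_vm

  name                  = each.value.user_agents_name
  kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
  vm_size               = each.value.user_agents_size
  # ... remaining arguments as in the configuration above ...

  kubelet_config {
    container_log_max_line    = each.value.container_log_max_line
    container_log_max_size_mb = each.value.container_log_max_size_mb
  }

  lifecycle {
    # Suppresses the forced destroy/recreate on existing pools.
    # Newly created pools still get kubelet_config applied, but
    # later edits to the block will be ignored by future plans.
    ignore_changes = [kubelet_config]
  }
}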

References

No response

zioproto commented 3 weeks ago

@yuslee80 thanks for opening this issue

I see you are using the azurerm_kubernetes_cluster_node_pool resource directly. Can you confirm that you are not using the module from this repository to create the node pool with the variable var.node_pools?
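
That is, whether your configuration looks roughly like the following. The exact object fields accepted by var.node_pools are defined in the module's variables.tf, so treat this shape as a hypothetical sketch:

module "aks" {
  source = "Azure/aks/azurerm"
  # ... other required module inputs ...

  # Hypothetical field names; check the module's variables.tf for
  # the actual object schema expected by var.node_pools.
  node_pools = {
    upool03 = {
      name    = "upool03"
      vm_size = "Standard_D4s_v3" # illustrative size
      # ...
    }
  }
}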

The correct place to open this bug is the HashiCorp azurerm provider repository: https://github.com/hashicorp/terraform-provider-azurerm/issues