hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
Mozilla Public License 2.0
4.53k stars 4.6k forks

Cannot reduce AKS auto-scale pool size even when default size not declared #18148

Closed jaohurtas closed 1 month ago

jaohurtas commented 2 years ago

Terraform Version

1.2.8

AzureRM Provider Version

3.20.0

Affected Resource(s)/Data Source(s)

azurerm_kubernetes_cluster

Terraform Configuration Files

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.20.0"
    }
  }

  required_version = ">= 1.2.8"
}

...
resource "azurerm_kubernetes_cluster" "aks_cluster" {
  name                      = var.name
  location                  = var.location
  resource_group_name       = var.resource_group_name
  kubernetes_version        = var.kubernetes_version
  dns_prefix                = var.dns_prefix
  private_cluster_enabled   = var.private_cluster_enabled
  automatic_channel_upgrade = var.automatic_channel_upgrade
  sku_tier                  = var.sku_tier

  default_node_pool {

    name                   = var.default_node_pool_name
    vm_size                = var.default_node_pool_vm_size
    vnet_subnet_id         = var.vnet_subnet_id
    #availability_zones     = var.default_node_pool_availability_zones
    node_labels            = var.default_node_pool_node_labels
    node_taints            = var.default_node_pool_node_taints
    enable_auto_scaling    = var.default_node_pool_enable_auto_scaling
    enable_host_encryption = var.default_node_pool_enable_host_encryption
    enable_node_public_ip  = var.default_node_pool_enable_node_public_ip
    max_pods               = var.default_node_pool_max_pods
    max_count              = var.default_node_pool_max_count
    min_count              = var.default_node_pool_min_count
    #node_count             = var.default_node_pool_node_count
    os_disk_type           = var.default_node_pool_os_disk_type
    tags                   = var.tags
  }
...

  lifecycle {
    ignore_changes = [
      kubernetes_version,
      tags
    ]
  }
...

Debug Output/Panic Output

Error: expanding `default_node_pool`: `node_count`(2) must be equal to or less than `max_count`(1) when `enable_auto_scaling` is set to `true`

Expected Behaviour

The auto-scaling minimum and maximum node counts are updated to 1 without error.

Actual Behaviour

Returns: Error: expanding default_node_pool: node_count(2) must be equal to or less than max_count(1) when enable_auto_scaling is set to true

Steps to Reproduce

1) Set the minimum and maximum autoscale node count to 2 and deploy.
2) Do not configure a default node count at any time.
3) Set the minimum and maximum autoscale node count to 1 and deploy.

Returns: Error: expanding default_node_pool: node_count(2) must be equal to or less than max_count(1) when enable_auto_scaling is set to true
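A minimal configuration that reproduces the sequence above might look like the following (a sketch; variable values and attribute selection are illustrative, only the attributes relevant to the error are shown):

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  # ... name, location, resource_group_name, dns_prefix, etc. omitted ...

  default_node_pool {
    name                = "system"
    vm_size             = "Standard_DS2_v2"
    enable_auto_scaling = true

    # Step 1: deploy with min_count = 2, max_count = 2.
    # Step 3: change both values to 1 and re-apply. The provider then
    # compares the state-recorded node_count (2) against the new
    # max_count (1) and fails, even though node_count was never set
    # in the configuration.
    min_count = 1
    max_count = 1
  }
}
```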

Important Factoids

No response

References

No response

jaohurtas commented 2 years ago

Note also that if I set node_count to 1, I get another error:

│ Error: expanding default_node_pool: cannot change node_count when enable_auto_scaling is set to true

      # (24 unchanged attributes hidden)

  ~ default_node_pool {
      ~ max_count                    = 2 -> 1
      ~ min_count                    = 2 -> 1
        name                         = "system"
      ~ node_count                   = 2 -> 1
        tags                         = {
            "createdWith" = "Terraform"
        }
        # (17 unchanged attributes hidden)
    }

    # (5 unchanged blocks hidden)
}

Plan: 0 to add, 1 to change, 0 to destroy.

module.aks_cluster.azurerm_kubernetes_cluster.aks_cluster: Modifying... [id=/subscriptions//resourceGroups//providers/Microsoft.ContainerService/managedClusters/***]

maxwell-gregory commented 2 years ago

Try not setting max_count and min_count. When autoscaling is disabled, the provider reads node_count, not min_count and max_count. If you want autoscaling, don't specify node_count. However, if you are setting min to 1 and max to 1, there is no need for autoscaling at all.
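The two modes described above are mutually exclusive. A sketch of both (attribute values are illustrative; the two blocks are alternatives, not meant to be used together):

```hcl
# Fixed-size pool: set node_count only, no autoscaler.
default_node_pool {
  name                = "system"
  vm_size             = "Standard_DS2_v2"
  enable_auto_scaling = false
  node_count          = 1
}

# Autoscaled pool: set min_count/max_count only; omit node_count
# so the autoscaler owns the pool size.
default_node_pool {
  name                = "system"
  vm_size             = "Standard_DS2_v2"
  enable_auto_scaling = true
  min_count           = 1
  max_count           = 3
}
```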

jaohurtas commented 2 years ago

> Try not setting max_count and min_count. When autoscaling is disabled, the provider reads node_count, not min_count and max_count. If you want autoscaling, don't specify node_count. However, if you are setting min to 1 and max to 1, there is no need for autoscaling at all.

I have auto-scaling enabled. I start with both max_count and min_count equal to 2 and need to be able to update them as needed. The error says the node_count setting is preventing changes to the auto-scale settings, yet I never specified node_count.

mcleanbc commented 3 months ago

I ran into this issue and think I identified the problem.

I think this could be fixed by updating https://github.com/hashicorp/terraform-provider-azurerm/blob/main/internal/services/containers/kubernetes_nodepool.go#L1389 to be "if maxCount < count && d.IsNewResource()", matching the behavior on line 1399 for minCount.

I may look into submitting a fix PR, but if someone wants to beat me to it, feel free :)
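The proposed change can be sketched as a standalone check. This is a hypothetical reconstruction of the validation logic, not the provider's actual code: `validateNodeCount`, its parameters, and the `isNewResource` flag (standing in for `d.IsNewResource()`) are names introduced here for illustration.

```go
package main

import "fmt"

// validateNodeCount mimics the default_node_pool check with the
// proposed fix applied: the max_count comparison only runs for new
// resources, so shrinking max_count below the state-recorded
// node_count of an existing autoscaled pool is allowed (the
// autoscaler will bring the count down on its own).
func validateNodeCount(count, maxCount int, autoScaling, isNewResource bool) error {
	if autoScaling && maxCount < count && isNewResource {
		return fmt.Errorf("`node_count`(%d) must be equal to or less than `max_count`(%d) when `enable_auto_scaling` is set to `true`", count, maxCount)
	}
	return nil
}

func main() {
	// Existing cluster: state-recorded node_count is 2, new max_count is 1.
	// With the fix, this passes instead of erroring.
	fmt.Println(validateNodeCount(2, 1, true, false))

	// A brand-new cluster with the same inconsistent values still fails.
	fmt.Println(validateNodeCount(2, 1, true, true))
}
```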

github-actions[bot] commented 3 weeks ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.