hashicorp / terraform-provider-azurerm

Terraform provider for Azure Resource Manager
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
Mozilla Public License 2.0

Autoscaling-enabled AKS cluster leads to error on terraform apply, though no changes on AKS planned #4075

Closed luc1f4 closed 4 years ago

luc1f4 commented 5 years ago

Terraform (and AzureRM Provider) Version

provider "azurerm" {
  version = "<= 1.32.0"
}

terraform {
  required_version = "0.11.14"
}

Affected Resource(s)

  * azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "k8s" {
  name                = "${local.name}"
  location            = "${var.region}"
  resource_group_name = "${azurerm_resource_group.default.name}"
  tags                = "${local.tags}"

  dns_prefix = "dns-${local.name}"

  agent_pool_profile {
    name                = "default"
    count               = "2"
    vm_size             = "Standard_F4s_v2"
    enable_auto_scaling = true
    min_count           = "2"
    max_count           = "10"
    os_type             = "Linux"
    os_disk_size_gb     = 30
    vnet_subnet_id      = "${azurerm_subnet.default.id}"
    type                = "VirtualMachineScaleSets"
  }

  addon_profile {
    http_application_routing {
      enabled = true
    }
  }

  network_profile {
    network_plugin = "azure"
  }

  service_principal {
    client_id     = "${var.sp_client_id}"
    client_secret = "${var.sp_client_secret}"
  }

  linux_profile {
    admin_username = "kubernetes"

    ssh_key {
      key_data = "${var.ssh_public_key}"
    }
  }
}

Terraform Plan Output

$ terraform plan --out terraformplan

Refreshing Terraform state in-memory prior to plan...

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:

  ~ update in-place

Terraform will perform the following actions:

  ~ azurerm_kubernetes_cluster.k8s
      tags.IacVersion:  "" => "v1.4.0"

  ~ kubernetes_config_map.deployment_config_map
      data.DEPLOYTIME: "2019-08-13T11:47:25Z" => "2019-08-13T14:51:00Z"

Plan: 0 to add, 2 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: terraformplan

To perform exactly these actions, run the following command to apply:
    terraform apply "terraformplan"

Error Output

$ terraform apply "terraformplan"

azurerm_kubernetes_cluster.k8s: Modifying...

  tags.IacVersion: "" => "v1.4.0"

Error: Error applying plan:

1 error occurred:
    * azurerm_kubernetes_cluster.k8s: 1 error occurred:
    * azurerm_kubernetes_cluster.k8s: Error creating/updating Managed Kubernetes Cluster "" (Resource Group ""): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="Cannot manually scale AgentPool 'default' of an AKS cluster with cluster autoscaler enabled. Please disable cluster autoscaler in the cluster to manually scale."

Expected Behavior

As the terraform plan suggested no changes to the cluster itself (only changes to its tags), the azurerm_kubernetes_cluster resource should stay untouched.

Actual Behavior

terraform apply fails with the InvalidParameter error shown above ("Cannot manually scale AgentPool 'default' of an AKS cluster with cluster autoscaler enabled"), even though only tags were changed.

Steps to Reproduce

  1. Make an IaC change that does not lead to any resource changes on azurerm_kubernetes_cluster (for example, only a tag value changes; see the sketch after this list)
  2. terraform plan --out tfplan
  3. terraform apply tfplan
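
For illustration, here is a minimal sketch of the kind of change in step 1, assuming the cluster tags come from a local map roughly like the one below (the reporter's locals are not shown in the issue; only the IacVersion key is visible in the plan output):

locals {
  # Bumping only this value is enough to reproduce the failure: the plan
  # shows tags.IacVersion: "" => "v1.4.0" on azurerm_kubernetes_cluster.k8s,
  # yet the apply is rejected by the AKS API.
  tags = {
    IacVersion = "v1.4.0"
  }
}
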
invidian commented 5 years ago

I can confirm that creating an AKS cluster and then trying to add tags to it currently fails with the mentioned error message. However, the issue title is misleading, as there are changes planned to the AKS resource.

The same thing actually happens when you try changing the Kubernetes version, even though the same change works just fine via the Azure CLI.
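
For example, a hedged sketch of that second case (1.14.6 is the OrchestratorVersion visible in the debug output further down; the value actually used is not shown in this thread):

  # inside the azurerm_kubernetes_cluster "k8s" resource block quoted above
  kubernetes_version = "1.14.6"

Applying such a change through the provider hit the same "Cannot manually scale AgentPool" rejection, while the equivalent upgrade through the Azure CLI (az aks upgrade) went through.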

invidian commented 5 years ago

I extracted the code from the provider to apply simple AKS updates and they seem to work, so something in the provider implementation must be breaking it: https://gist.github.com/invidian/c20df813df64df0ce0ddb0d68df79b53

invidian commented 5 years ago

I added some debug messages, and it seems that expandKubernetesClusterAgentPoolProfiles is currently mutating the pools, which is why the update is rejected:

2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4: AgentPoolProfiles:
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:   0:
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     Name: default
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     Count: 2
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     VMSize: Standard_D1_v2
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     OsDiskSizeGB: 30
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     VnetSubnetID: /subscriptions/s/resourceGroups/s/providers/Microsoft.Network/virtualNetworks/s/subnets/s
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     MaxPods: 110
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     OsType: Linux
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     MaxCount: 5
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     MinCount: 2
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     EnableAutoScaling: true
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     Type: VirtualMachineScaleSets
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     OrchestratorVersion: 1.14.6
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     ProvisioningState: Succeeded
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     EnableNodePublicIP: false
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     ScaleSetPriority:
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     ScaleSetEvictionPolicy:
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4: New AgentPoolProfiles:
2019-09-05T11:04:45.017+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:   0:
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     Name: default
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     VMSize: Standard_D1_v2
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     OsDiskSizeGB: 30
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     VnetSubnetID: /subscriptions/s/resourceGroups/s/providers/Microsoft.Network/virtualNetworks/s/subnets/s
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     MaxPods: 110
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     OsType: Linux
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     MaxCount: 5
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     MinCount: 2
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     EnableAutoScaling: true
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     Type: VirtualMachineScaleSets
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     ScaleSetPriority:
2019-09-05T11:04:45.018+0200 [DEBUG] plugin.terraform-provider-azurerm_v1.33.1_x4:     ScaleSetEvictionPolicy:
invidian commented 5 years ago

Created PR with fixes for it #4256 :)

raphaelquati commented 5 years ago

Any updates?

giggio commented 5 years ago

Is there a workaround until this is merged and released? I really don't want to recreate my cluster.

brianxieseattle commented 5 years ago

When will all the related fixes be checked in? Is there any workaround before the code fix is made?

evmimagina commented 5 years ago

Hi, I think I'm still affected by this issue… Is it merged to the master branch? Which AzureRM version should I use?

Thanks,

ghost commented 4 years ago

This has been released in version 1.37.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 1.37.0"
}
# ... other configuration ...

ghost commented 4 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error πŸ€– πŸ™‰ , please reach out to my human friends πŸ‘‰ hashibot-feedback@hashicorp.com. Thanks!