Hi @DoubleTK, I've tested the following code under v2.87.0:
provider "azurerm" {
features {
}
}
resource "azurerm_resource_group" "example" {
name = "zjhe-f12991"
location = "West Europe"
}
resource "azurerm_kubernetes_cluster" "example" {
name = "zjhe-f12991"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
dns_prefix = "exampleaks1"
kubernetes_version = "1.20.7"
default_node_pool {
orchestrator_version = "1.19.7"
name = "agentpool"
node_count = 1
vm_size = "Standard_D2_v2"
}
# identity {
# type = "SystemAssigned"
# }
service_principal {
client_id = "##############"
client_secret = "##############"
}
tags = {
Environment = "Production"
}
}
There's some issue with identity on my machine, so I used service_principal instead. As you can see, my kubernetes_version is different from the orchestrator_version. After applying:
azurerm_kubernetes_cluster.example: Still creating... [4m20s elapsed]
azurerm_kubernetes_cluster.example: Creation complete after 4m21s [id=/subscriptions/########/resourceGroups/zjhe-f12991/providers/Microsoft.ContainerService/managedClusters/zjhe-f12991]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
PS D:\project\f-12991> terraform console
> azurerm_kubernetes_cluster.example.default_node_pool[0].orchestrator_version
"1.20.7"
The orchestrator_version is now the same as kubernetes_version right after apply. Would you please double-check with the latest provider version?
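If it helps, here is a minimal sketch of how to pin the provider to at least the version I tested with (the exact constraint is an assumption; bump it to whatever the latest release is):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # assumed constraint - adjust to the latest azurerm release
      version = ">= 2.87.0"
    }
  }
}

After updating the constraint, terraform init -upgrade followed by another terraform plan should show whether the orchestrator_version change is detected.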
Closing since we haven't heard back. If this problem still occurs with the latest provider version please feel free to open an issue.
Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I'm attempting to upgrade the default node pool k8s version using Terraform's azurerm_kubernetes_cluster, but it's telling me there is nothing to change. Here are the relevant details:
Terraform and azurerm versions:
tfstate file (redacted, pulled from Terraform Cloud)
Resource definition
Variables
Experimental features
Issue
Specifying kubernetes_version worked to upgrade the k8s control plane within Azure, but the default node pool did not update. After a bit more research I found the orchestrator_version argument, and to my understanding this should upgrade the k8s version for the virtual machine scale set (see the sketch at the end of this report). When I run terraform plan, it tells me "Your infrastructure matches the configuration", even though tfstate clearly shows orchestrator_version as "1.19.7" and my variable (I've tried hard-coding it as well) is set to "1.20.7".
Expected Behavior
The default node pool of the cluster is upgraded from 1.19.7 to 1.20.7.
Actual Behavior
No configuration changes are detected for the default node pool.
Steps to reproduce
I'm not entirely sure. Our cluster is currently in this state. I have not manually upgraded the node pool through the portal yet. I wanted to see if there was a solution to this issue before changing the environment manually.
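For reference, here is a rough sketch of the shape of my configuration (not the actual, redacted resource definition above; the names, placeholder values, and identity block are illustrative):

variable "aks_version" {
  # illustrative variable name; I have also tried hard-coding the value
  default = "1.20.7"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "East US"    # placeholder
  resource_group_name = "example-rg" # placeholder
  dns_prefix          = "exampleaks"

  # control plane version - this upgrade went through as expected
  kubernetes_version = var.aks_version

  default_node_pool {
    name       = "agentpool"
    node_count = 1
    vm_size    = "Standard_D2_v2"

    # node pool (VM scale set) version - expected to produce a diff from
    # 1.19.7 to 1.20.7, but terraform plan reports no changes
    orchestrator_version = var.aks_version
  }

  identity {
    type = "SystemAssigned"
  }
}

With this shape of configuration, I expected changing the version value to show up as a change on default_node_pool[0].orchestrator_version in the plan.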