matthiasritter opened 1 year ago
I think I found the magic sauce to make this problem go away:
workload_autoscaler_profile {
  keda_enabled                    = false
  vertical_pod_autoscaler_enabled = false
}
(coworker of @matthiasritter here 👋) We came back to this because we also needed vpa, and we found the reason for this behaviour and a workaround.
This is because the Azure API returns
"workloadAutoScalerProfile": {}
if keda or vpa has never been enabled on this cluster before. But once you enable one of the features and disable it later, the Azure API returns
"workloadAutoScalerProfile": {
  "keda": {
    "enabled": false
  }
}
instead of an empty block.
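If you want to check which of the two responses your cluster returns, you can inspect the raw profile with the Azure CLI (a sketch; the resource group and cluster name are placeholders):

az aks show \
  --resource-group example-rg \
  --name example-aks \
  --query workloadAutoScalerProfile

A cluster where neither feature was ever enabled returns an empty object, while a cluster where KEDA was enabled and later disabled returns the explicit "enabled": false block shown above.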
If you omit the workload_autoscaler_profile block on a cluster that previously had an autoscaler enabled, the Azure API state will be "enabled": false, but Terraform will try to set it to null (which leaves the remote state unchanged, so the same diff reappears on the next run). On the other hand, if you set it to false in your code on a cluster that has never had an autoscaler enabled before, the Azure API state will be empty, but Terraform will try to set it to false (which again leaves it empty in the Azure API).
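The first case looks like this (a sketch; assume KEDA was enabled on this cluster at some point and the block was later removed from the configuration):

resource "azurerm_kubernetes_cluster" "aks" {
  # [...]

  # No workload_autoscaler_profile block here: Terraform plans it as
  # null, but the Azure API keeps reporting "keda": { "enabled": false },
  # so the same diff reappears on every run.
}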
You can work around this with the following code:
resource "azurerm_kubernetes_cluster" "aks" {
[...]
dynamic "workload_autoscaler_profile" {
for_each = var.vpa_enable != null || var.keda_enable != null ? [1] : []
content {
keda_enabled = var.keda_enable
vertical_pod_autoscaler_enabled = var.vpa_enable
}
}
[...]
}
variable "keda_enable" {
type = bool
default = null
}
variable "vpa_enable" {
type = bool
default = null
}
(Both variables default to null.)
With this code you can cover all three states: leave both variables at null to omit the block entirely, or set at least one of them to true or false to manage the features explicitly (see the usage sketch below).
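For example, assuming the resource and variables above live in a reusable module (the module name and source path are hypothetical):

module "aks" {
  source = "./modules/aks"

  # Both variables left at null: the workload_autoscaler_profile block
  # is omitted entirely, matching the empty profile in the Azure API.
}

module "aks_with_vpa" {
  source = "./modules/aks"

  # Explicitly managed: the dynamic block is rendered and both
  # attributes are sent to the API.
  keda_enable = false
  vpa_enable  = true
}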
Hope this helps.
We used the feature directly, but unfortunately could not set all the settings we needed there, so we removed the "workload_autoscaler_profile" block. However, once KEDA has been enabled on a cluster, the attribute has to be set explicitly to "false" or "true": if we remove the block, Terraform sets it to null, but on the Azure side it is magically set back to "false". So on the next run Terraform sets it back to null, Azure sets it to false again, and so on.

On new AKS clusters the same loop occurs, just with the "false" value: if KEDA was never enabled and we set it to false via Terraform, Azure magically resets it to "null". On the next Terraform run it is set to "false" again, and Azure (again) resets it to "null".
Terraform Version
1.5.2
AzureRM Provider Version
3.63.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
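A minimal sketch of the new-cluster case described above (all names and sizes are placeholder values, not from a real environment):

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }

  # KEDA was never enabled on this cluster: Azure normalizes this
  # block back to an empty workloadAutoScalerProfile, so Terraform
  # wants to set "false" again on every subsequent run.
  workload_autoscaler_profile {
    keda_enabled = false
  }
}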
Expected Behaviour
Terraform should not plan any change to workload_autoscaler_profile when the configuration has not changed.