sunilnagavelli opened this issue 1 year ago
Thanks for opening an issue @sunilnagavelli. It looks like you are trying to create a Kubernetes cluster and then create Kubernetes resources in the same apply operation – this is known to cause strange issues when trying to authenticate with the cluster. Our documentation recommends splitting these so that your cluster creation and Kubernetes resource creation happen in separate apply runs.
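A minimal sketch of that split, where the second configuration only reads the already-created cluster (the names below are placeholders, not from this issue):

```hcl
# Separate configuration / state: the cluster already exists, so we only
# read it here and then create Kubernetes resources against it.
data "azurerm_kubernetes_cluster" "existing" {
  name                = "example-aks" # placeholder
  resource_group_name = "example-rg"  # placeholder
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.existing.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.existing.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.existing.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.existing.kube_config[0].cluster_ca_certificate)
}
```

Note that with Azure AD RBAC enabled the client certificate fields can be empty, which is exactly what the comments below run into.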
I am actually facing a very similar issue, and in my setup the cluster was already created a while ago, yet authentication for the kubernetes provider still fails.
What is very similar in my setup is that I am (also) using an Azure Service Principal to deploy the AKS cluster, and based on this I think @sunilnagavelli is using one too:
provider "azurerm" {
tenant_id = var.tenant_id
client_id = var.client_id
client_secret = var.client_secret
subscription_id = var.subscription_id
features {}
}
Furthermore, Azure AD RBAC is enabled for the AKS cluster here, which I have as well:
```hcl
azure_active_directory_role_based_access_control {
  managed            = true
  azure_rbac_enabled = true
  #admin_group_object_ids = values(var.aks_admin_groups_aad)
}
```
I am not making use of any admin groups but have instead assigned the necessary Azure AD role (RBAC Admin) directly to the service principal, so that it has the permissions to e.g. create the namespace. I can only assume that @sunilnagavelli added their service principal to the admin group specified above so that it also has the required permission.
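For reference, such a direct role assignment might look roughly like the following sketch (the exact built-in role name and the principal reference are assumptions on my part):

```hcl
# Grant the deploying service principal Azure AD RBAC rights on the cluster
# so it can manage namespaces etc. The role name is an assumption here.
resource "azurerm_role_assignment" "aks_rbac_admin" {
  scope                = azurerm_kubernetes_cluster.kubernetes_cluster.id
  role_definition_name = "Azure Kubernetes Service RBAC Admin"
  principal_id         = data.azurerm_client_config.current.object_id
}
```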
So the remaining question is: how do I authenticate in the kubernetes provider using an Azure AD service principal, which typically has a client_id and client_secret property?
This might solve the issue, I will try it now as well:
https://github.com/hashicorp/terraform-provider-kubernetes/issues/2072#issuecomment-1508197008
Yes, with a provider initialization like the following I can confirm it works when Azure AD RBAC is enabled and things are deployed through a service principal:
provider "kubernetes" {
host = format("%s://%s:%s", "https", azurerm_kubernetes_cluster.kubernetes_cluster.fqdn, "443")
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.kubernetes_cluster.kube_config[0].cluster_ca_certificate)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "/usr/local/bin/kubelogin"
args = [
"get-token",
"--login",
"spn",
"--environment",
"AzurePublicCloud",
"--tenant-id",
data.azurerm_client_config.current.tenant_id,
"--server-id",
data.azuread_service_principal.aks_server_sp.client_id,
"--client-id",
data.azurerm_client_config.current.client_id,
"--client-secret",
data.azurerm_key_vault_secret.sp_secret.value,
]
}
}
Btw, I have to use a different value for the host name since we are also using the private API Server VNet Integration feature.
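In case it helps others with a similar setup, that presumably boils down to pointing host at the cluster's private FQDN instead, something like this untested sketch:

```hcl
# With a private API server, the public fqdn is not the right endpoint;
# the cluster's private_fqdn attribute can be used instead.
host = format("%s://%s:%s", "https", azurerm_kubernetes_cluster.kubernetes_cluster.private_fqdn, "443")
```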
Affected Resource(s)
Azure Kubernetes Service
Expected Behavior
The AKS cluster together with the Kubernetes namespace should be created.
Actual Behavior
Namespace creation failed with an unauthorized error.
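For context, the namespace in question was presumably defined with something like this minimal sketch (the resource and namespace names are placeholders, the original issue does not show the actual resource):

```hcl
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example-namespace" # placeholder name
  }
}
```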