Closed kyma closed 4 days ago
Hi @kyma,
This happens because you create the Kubernetes cluster and provision resources on it in a single apply. In this case, the Helm provider doesn't get a valid configuration because the data resource doesn't return anything yet. It should, though, work on the second run.
We recommend splitting cluster management and resource management into separate modules or applies.
I hope it helps.
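To illustrate the split suggested above (a sketch only; the resource names, group names, and paths here are hypothetical), the cluster and the Helm releases can live in two separate root modules. The second module reads the cluster through a data source, so the Helm provider is always configured from values that are known at plan time:

```hcl
# --- Stage 1: cluster/main.tf (applied first) ---
resource "azurerm_resource_group" "cluster" {
  name     = "example-rg" # hypothetical
  location = "westeurope"
}

resource "azurerm_kubernetes_cluster" "main" {
  name                = "example-aks" # hypothetical
  location            = azurerm_resource_group.cluster.location
  resource_group_name = azurerm_resource_group.cluster.name
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2as_v5"
  }

  identity {
    type = "SystemAssigned"
  }
}

# --- Stage 2: releases/main.tf (a separate state, applied after the cluster exists) ---
data "azurerm_kubernetes_cluster" "main" {
  name                = "example-aks"
  resource_group_name = "example-rg"
}

provider "helm" {
  kubernetes {
    host                   = data.azurerm_kubernetes_cluster.main.kube_config.0.host
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.main.kube_config.0.client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.main.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.main.kube_config.0.cluster_ca_certificate)
  }
}
```

Because stage 2 only ever sees a cluster that already exists, its data source can never return empty credentials during a plan.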
Hey @kyma, I'm having the same issue, but with different code that doesn't rely on a data block.
@arybolovlev thanks for your reply. In my case the code runs just fine until I change a property of the cluster; it can be as simple as changing a tag. The run after changing a property is when I get this issue.
Is this also caused by provisioning resources in a single apply?
Here's my code:
```hcl
resource "azurerm_kubernetes_cluster" "elastic" {
  name                      = "${var.environment_name}-elk-${var.environment_type}"
  location                  = var.location
  resource_group_name       = azurerm_resource_group.elastic_cluster.name
  node_resource_group       = "${var.environment_name}-elk-pool-${var.environment_type}"
  dns_prefix                = "${var.environment_name}elk${var.environment_type}"
  automatic_channel_upgrade = "stable"
  sku_tier                  = var.environment_type == "production" ? "Standard" : "Free"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2as_v5"
    tags = merge(
      { component = "system" },
      local.tags
    )
  }

  identity {
    type = "SystemAssigned"
  }

  azure_active_directory_role_based_access_control {
    managed                = true
    admin_group_object_ids = var.cluster_admin_group_object_ids
  }

  local_account_disabled = false

  tags = merge(
    { component = "elastic" },
    local.tags
  )
}

provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.elastic.kube_admin_config.0.host
    username               = azurerm_kubernetes_cluster.elastic.kube_admin_config.0.username
    password               = azurerm_kubernetes_cluster.elastic.kube_admin_config.0.password
    client_certificate     = base64decode(azurerm_kubernetes_cluster.elastic.kube_admin_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.elastic.kube_admin_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.elastic.kube_admin_config.0.cluster_ca_certificate)
  }
}

resource "helm_release" "elastic_operator" {
  name             = "elastic-operator"
  repository       = "https://helm.elastic.co/"
  chart            = "eck-operator"
  namespace        = "elastic-system"
  create_namespace = true

  set {
    name  = "image.tag"
    value = "2.8.0"
  }
}
```
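One detail that may explain the tag scenario: when a plan includes any change to `azurerm_kubernetes_cluster.elastic`, Terraform can treat the resource's exported attributes, including `kube_admin_config`, as unknown until the apply completes, so the Helm provider is initialized with an empty configuration at plan time. A commonly suggested workaround (a sketch only, using standard Terraform CLI flags; note that Terraform's own documentation recommends `-target` only for exceptional situations) is to apply the cluster change by itself first, then run a full apply:

```
# Apply only the cluster change first, so its exported attributes
# are fully known again before anything else is planned.
terraform apply -target=azurerm_kubernetes_cluster.elastic

# Then run the full apply; the Helm provider can now be configured
# from the settled kube_admin_config values.
terraform apply
```

The more durable fix is the one suggested earlier in the thread: keep the cluster and the Helm releases in separate configurations so the provider never depends on attributes of a resource being changed in the same plan.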
Question
I'm running the config above in Terraform Cloud but constantly getting the error:

Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

  with helm_release.nginx-ingress,
  on kubernetes.tf line 42, in resource "helm_release" "nginx-ingress":