hashicorp / terraform-provider-helm

Terraform Helm provider
https://www.terraform.io/docs/providers/helm/
Mozilla Public License 2.0

Kubernetes cluster unreachable: invalid configuration: no configuration has been provided #1181

Closed · kyma closed this issue 4 days ago

kyma commented 1 year ago

Terraform version, Kubernetes provider version and Kubernetes version

Terraform version: 1.5.0
Helm Provider version: 2.10.1
Kubernetes version: 1.25.6

Terraform configuration

resource "azurerm_kubernetes_cluster" "My_Test" {
  name                = "Cluster_My_Test"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "My_Test"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  role_based_access_control_enabled = true

  tags = {
    Environment = "Development"
  }
}

data "azurerm_kubernetes_cluster" "My_Test" {
  name                = azurerm_kubernetes_cluster.My_Test.name
  resource_group_name = azurerm_kubernetes_cluster.My_Test.resource_group_name
  depends_on          = [azurerm_kubernetes_cluster.My_Test]
}

provider "helm" {

  kubernetes {
    config_context         = "default"
    host                   = data.azurerm_kubernetes_cluster.My_Test.kube_config.0.host
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.My_Test.kube_config.0.client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.My_Test.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.My_Test.kube_config.0.cluster_ca_certificate)
  }

}

resource "helm_release" "nginx-ingress" {
  name       = "nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  set {
    name  = "controller.replicaCount"
    value = "1"
  }

  depends_on = [data.azurerm_kubernetes_cluster.My_Test]
}

Question

I'm running the config above in Terraform Cloud but keep getting the following error:

Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable with helm_release.nginx-ingress on kubernetes.tf line 42, in resource "helm_release" "nginx-ingress":

arybolovlev commented 1 year ago

Hi @kyma,

This happens because you create the Kubernetes cluster and provision resources on it in a single apply. In this case, the Helm provider doesn't get a valid configuration because the data source doesn't return anything yet. It should, though, work on the second run.

We recommend splitting cluster and resource management into separate modules or applies.

I hope it helps.
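
For illustration, a minimal sketch of what the second, Helm-only apply could look like under that recommendation, reusing the names from the example above (the resource group name is a placeholder; this assumes the first apply has already created the cluster, so the data source resolves at plan time):

# Second, separate apply: the cluster already exists, so the data source
# can resolve during plan and the Helm provider gets a valid configuration.
data "azurerm_kubernetes_cluster" "My_Test" {
  name                = "Cluster_My_Test"
  resource_group_name = "rg-my-test" # placeholder: resource group created by the first apply
}

provider "helm" {
  kubernetes {
    host                   = data.azurerm_kubernetes_cluster.My_Test.kube_config.0.host
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.My_Test.kube_config.0.client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.My_Test.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.My_Test.kube_config.0.cluster_ca_certificate)
  }
}

resource "helm_release" "nginx-ingress" {
  name       = "nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"

  set {
    name  = "controller.replicaCount"
    value = "1"
  }
}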

kiweezi commented 1 year ago

Hey @kyma, I'm having the same issue, but with different code that doesn't rely on a data block.

@arybolovlev thanks for your reply. In my case my code runs just fine until I change a property of the cluster; it can be as simple as changing a tag. The next run after changing a property is when I hit this issue.

Is this also caused by provisioning resources in a single apply?

Here's my code:

resource "azurerm_kubernetes_cluster" "elastic" {
  name                = "${var.environment_name}-elk-${var.environment_type}"
  location            = var.location
  resource_group_name = azurerm_resource_group.elastic_cluster.name
  node_resource_group = "${var.environment_name}-elk-pool-${var.environment_type}"
  dns_prefix          = "${var.environment_name}elk${var.environment_type}"
  automatic_channel_upgrade = "stable"

  sku_tier = var.environment_type == "production" ? "Standard" : "Free"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2as_v5"

    tags = merge(
      { component = "system" },
      local.tags
    )
  }

  identity {
    type = "SystemAssigned"
  }

  azure_active_directory_role_based_access_control {
    managed = true
    admin_group_object_ids = var.cluster_admin_group_object_ids
  }
  local_account_disabled = false

  tags = merge(
    { component = "elastic" },
    local.tags
  )
}

provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.elastic.kube_admin_config.0.host
    username               = azurerm_kubernetes_cluster.elastic.kube_admin_config.0.username
    password               = azurerm_kubernetes_cluster.elastic.kube_admin_config.0.password
    client_certificate     = base64decode(azurerm_kubernetes_cluster.elastic.kube_admin_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.elastic.kube_admin_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.elastic.kube_admin_config.0.cluster_ca_certificate)
  }
}

resource "helm_release" "elastic_operator" {
  name             = "elastic-operator"
  repository       = "https://helm.elastic.co/"
  chart            = "eck-operator"
  namespace        = "elastic-system"
  create_namespace = true

  set {
    name  = "image.tag"
    value = "2.8.0"
  }
}
github-actions[bot] commented 1 month ago

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!