mumoshu / terraform-provider-helmfile

Deploy Helmfile releases from Terraform

Unable to create K8s cluster and deploy Helmfile release in one go #57

Open anton-kachurin opened 3 years ago

anton-kachurin commented 3 years ago

I have a Terraform module that creates an Azure AKS cluster and outputs the kube config needed to connect to it (see below; it's available as module.kubernetes_cluster.kube_config).
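
For reference, the relevant part of that module is essentially an output along these lines (simplified here; the actual resource and output names may differ):

# Simplified sketch of the module output referenced above; "main" is a placeholder name.
output "kube_config" {
  value     = azurerm_kubernetes_cluster.main.kube_config_raw
  sensitive = true
}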

To the script that instantiates this module, I added the following:

resource "helmfile_release_set" "app" {
  content = yamlencode({
    releases = [{
      name      = "app"
      namespace = "default"
      chart     = "../app"
      values = [
        "values.yaml",
        {
          secrets = {
            a = local.secret_a
            b = local.secret_b
          }
        },
      ]
    }]
  })

  environment = "default"

  environment_variables = {
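    # KUBECONFIG points Helmfile at the file written by local_file.kube_config below.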
    KUBECONFIG = local_file.kube_config.filename
  }
}

resource "local_file" "kube_config" {
  filename          = "${path.root}/.kube/kube_config"
  sensitive_content = module.kubernetes_cluster.kube_config
}

I'm not able to terraform apply this as is. Both terraform plan and terraform apply fail with an error message saying that the kube config is missing. That's fair enough: before the apply there is indeed no config on disk, since it's only generated during the apply. But helmfile_release_set expects it to already be there in order to compute the diff.

Running terraform apply -target=local_file.kube_config first resolves the issue, but ideally a Terraform configuration shouldn't have to be split into "stages". I came up with this setup after reading the discussion in #20, so maybe I'm missing something and it isn't supposed to fail when implemented properly? If I understand correctly, the problem is that Helmfile is executed during terraform plan (or the planning phase of terraform apply) to calculate the diff. Calculating the diff shouldn't be necessary while the resource is being created, so a way to suppress it would help; a rough sketch of what I mean follows.
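
Something like the following would be enough for this use case (the attribute is made up; nothing like it exists in the provider today, it's only meant to illustrate the kind of switch I have in mind):

resource "helmfile_release_set" "app" {
  # ... same configuration as above ...

  # Hypothetical flag, not an existing provider attribute: skip running
  # `helmfile diff` while the resource is first being created, so a kube config
  # that only exists after apply doesn't break the plan.
  skip_diff_on_create = true
}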

Also, in #20 and #51 the idea of adding configuration options to the provider was brought up. That does seem to be the standard way of doing this in Terraform. For example, the official docs contain this snippet: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.main.kube_config.0.host
  username               = azurerm_kubernetes_cluster.main.kube_config.0.username
  password               = azurerm_kubernetes_cluster.main.kube_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.cluster_ca_certificate)
}

It seems that providers can be configured with the outputs of resources created in the same configuration. That presumably means the initialization of the provider, and of all resources associated with it, is postponed until those outputs become available. If that's the case, then this provider should be able to generate a temporary kube config file from those outputs and feed it to Helmfile during terraform plan or terraform apply.
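
If the provider went that route, the configuration could look roughly like this (purely a sketch; none of these arguments exist today, the names are only meant to illustrate the idea):

# Hypothetical provider block: the provider would write this material to a
# temporary kube config file itself and point Helmfile at it during plan/apply.
provider "helmfile" {
  kubeconfig_raw = module.kubernetes_cluster.kube_config

  # ...or individual credentials, mirroring the kubernetes provider example above:
  # host                   = ...
  # cluster_ca_certificate = ...
}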

Any advice on this is highly appreciated. Please let me know if you need to see how the other pieces are implemented to get a fuller picture.