gruntwork-io / terragrunt

Terragrunt is a flexible orchestration tool that allows Infrastructure as Code written in OpenTofu/Terraform to scale.
https://terragrunt.gruntwork.io/
MIT License

kubernetes/helm provider dependencies on local file #1996

Closed · icy closed 2 years ago

icy commented 2 years ago

Hi there,

I'm sorry if a similar ticket has been raised already. I did search, but couldn't find anything matching my situation. Please correct me if I missed one.

Basically, we have kubeconfig files stored in AWS SSM Parameter Store. We generate a local file from them and hand it to the kubernetes provider, as below:

# Terragrunt module A
locals {
  # Local path the kubeconfig is written to (referenced again in module B).
  k3s_config_local_path = "config.yaml"
}

data "aws_ssm_parameter" "k3s" {
  name            = "/some/path/on/aws/ssm"
  with_decryption = true
}

resource "local_file" "k3s_config" {
  content         = data.aws_ssm_parameter.k3s.value
  filename        = local.k3s_config_local_path
  file_permission = "0644"
}

From Terragrunt module B, which depends on module A:

# terragrunt.hcl has block that declares a dependency on A.

provider "kubernetes" {
  config_path = local.k3s_config_local_path
}

As far as I know, terragrunt run-all apply will work well; however, terragrunt run-all plan doesn't work on a fresh checkout of the source tree, because the local file doesn't exist yet when the kubernetes provider starts its job.

Is there any workaround for this problem with help from terragrunt?

Thanks for reading.

denis256 commented 2 years ago

Hi, I think that using output variables and mock_outputs on dependencies can help.

Simplified example:

├── a
│   ├── main.tf
│   └── terragrunt.hcl
└── b
    ├── main.tf
    └── terragrunt.hcl

# a/main.tf
provider "aws" {
  region  = "us-east-1"
}
data "aws_ssm_parameter" "k3s" {
  name = "test"
  with_decryption = true
}
resource "local_file" "k3s_config" {
  content         = "module a config"
  filename        = data.aws_ssm_parameter.k3s.value
  file_permission = "0644"
}
output "config_file" {
  value = local_file.k3s_config.filename
  sensitive = true
}

# b/terragrunt.hcl
dependency "a" {
  config_path = "../a"
  mock_outputs = {
    config_file = "file.yaml"
  }
}
inputs = {
  k8s_config = dependency.a.outputs.config_file
}

# b/main.tf
variable "k8s_config" {
  type = string
}
resource "local_file" "provider_file" {
  content         = "k8s_config  ${var.k8s_config}"
  filename        = "module_b_provider_file.txt"
  file_permission = "0644"
}

Running terragrunt run-all plan for the first time will produce:

...
<module a output>
Terraform will perform the following actions:

  # local_file.k3s_config will be created
  + resource "local_file" "k3s_config" {
      + content              = "module a config"
      + directory_permission = "0777"
      + file_permission      = "0644"
      + filename             = (sensitive)
      + id                   = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + config_file = (sensitive value)
...
<module b output>
Terraform will perform the following actions:

  # local_file.provider_file will be created
  + resource "local_file" "provider_file" {
      + content              = "k8s_config  file.yaml"
      + directory_permission = "0777"
      + file_permission      = "0644"
      + filename             = "module_b_provider_file.txt"
      + id                   = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

So the value ${var.k8s_config} is carried from A to B, with the mock standing in on the first plan.
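
For what it's worth, mocks can also be restricted to specific commands via mock_outputs_allowed_terraform_commands, so apply never silently falls back to placeholder values. A minimal sketch of the same dependency block with that attribute added:

# b/terragrunt.hcl — the mock is only used for plan; apply requires real outputs
dependency "a" {
  config_path = "../a"
  mock_outputs = {
    config_file = "file.yaml"
  }
  mock_outputs_allowed_terraform_commands = ["plan"]
}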

icy commented 2 years ago

Hi, I think that using output variables and mock_outputs on dependencies can help.

@denis256 Thanks for your feedback. I think the problem with mocking is that it doesn't provide any real data. At plan time, the helm/kubernetes provider actually requires a real configuration to access the cluster; without a correct k8s configuration, helm will likely report an error when attempting to reach the k8s API. This is kind of annoying.

What I'm doing now is executing a simple command to generate the k8s configuration, as an alternative to running terragrunt apply on module A (FYI, I'm using before_script within a GitLab build context). I think it would be great if terragrunt could offer some pre-hook command and/or execute an actual terragrunt apply before continuing. But that's my 2 cents; maybe there is a better way.

Edit: fixed typo errors (but there are still some :))

denis256 commented 2 years ago

Hi, most probably hooks in terragrunt.hcl can be used to achieve a similar result:

# terragrunt.hcl
terraform {
  before_hook "fetch_kubernetes_config" {
    commands = ["apply", "plan"]
    execute  = ["echo", "Fetching Kubernetes config"]
  }
}

https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#terraform
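
To make that concrete for this case, the hook could pull the kubeconfig out of SSM before terraform runs. A rough sketch, assuming the AWS CLI is on the PATH and reusing the parameter path and filename from the earlier examples:

# terragrunt.hcl — sketch: write the kubeconfig before plan/apply runs
terraform {
  before_hook "fetch_kubernetes_config" {
    commands = ["apply", "plan"]
    execute  = [
      "bash", "-c",
      "aws ssm get-parameter --name /some/path/on/aws/ssm --with-decryption --query Parameter.Value --output text > config.yaml"
    ]
  }
}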

icy commented 2 years ago

@denis256 That's a nice idea. I will look at how it works with our workflow. Thanks for the information.

icy commented 2 years ago

Sorry for the late feedback. We have now dropped the local file entirely and use the provider's parameters instead. It's not very convenient in some ways (see [0]), but it works in our case: since the kubeconfig now comes from a data source that is read at plan time, a fresh checkout can plan without any pre-generated file. The whole code is below; we hope it helps.

locals {
  k3s_config_ssm_path = "/some/path/on/aws/ssm/config.yaml"
}

data "aws_ssm_parameter" "k3s" {
  name            = local.k3s_config_ssm_path
  with_decryption = true
}

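# Decode the kubeconfig YAML fetched from SSM and extract the connection
# details for the first cluster/user entry.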
locals {
  k8s_config_as_json = yamldecode(data.aws_ssm_parameter.k3s.value)
  k8s_config = {
    host                   = local.k8s_config_as_json["clusters"][0]["cluster"]["server"]
    cluster_ca_certificate = base64decode(local.k8s_config_as_json["clusters"][0]["cluster"]["certificate-authority-data"])
    client_certificate     = base64decode(local.k8s_config_as_json["users"][0]["user"]["client-certificate-data"])
    client_key             = base64decode(local.k8s_config_as_json["users"][0]["user"]["client-key-data"])
  }
}

provider "kubernetes" {
  host                   = local.k8s_config["host"]
  cluster_ca_certificate = local.k8s_config["cluster_ca_certificate"]
  client_certificate     = local.k8s_config["client_certificate"]
  client_key             = local.k8s_config["client_key"]
}

provider "helm" {
  kubernetes {
    host                   = local.k8s_config["host"]
    cluster_ca_certificate = local.k8s_config["cluster_ca_certificate"]
    client_certificate     = local.k8s_config["client_certificate"]
    client_key             = local.k8s_config["client_key"]
  }
}