gavinbunney / terraform-provider-kubectl

Terraform provider to handle raw kubernetes manifest yaml files
https://registry.terraform.io/providers/gavinbunney/kubectl
Mozilla Public License 2.0
609 stars · 102 forks

yaml change causes destroy -> create instead of in place. #275

Open yctn opened 11 months ago

yctn commented 11 months ago

Every time I change something in the YAML, Terraform destroys the resource and recreates it; it never updates it in place. This sometimes causes operators to do a full restart of clusters.

I suspect this is because the full YAML is in the Terraform state. Is this expected, or am I simply doing something wrong here? My Terraform code is as follows.

`data "kubectl_path_documents" "kafka-yaml" { pattern = "./manifest/kafka/*.yaml.tpl" }

resource "kubectl_manifest" "kafka-GRA7" { provider = kubectl.GRA7 for_each = toset(data.kubectl_path_documents.kafka-yaml.documents) yaml_body = each.value wait = false wait_for_rollout = false }`

alekc commented 11 months ago

As discussed in https://github.com/alekc/terraform-provider-kubectl/issues/50, the proper way to do this is to use the manifests attribute rather than documents, i.e.:

data "kubectl_path_documents" "manifests-directory-yaml" {
  pattern = "./manifests/*.yaml"
}
resource "kubectl_manifest" "directory-yaml" {
  for_each  = data.kubectl_path_documents.manifests-directory-yaml.manifests
  yaml_body = each.value
}

Documentation has been updated on the fork.
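To see why this matters, note that for_each keys each resource instance by its map key (or, with toset, by the value itself). With documents, the key is the full YAML body, so any edit changes the key and Terraform sees "old resource gone, new resource appeared" (destroy + create). With manifests, the key is a stable identifier derived from the manifest, so an edit only changes the value and plans as an in-place update. The sketch below is a simplified illustration of that keying logic, not actual Terraform or provider code; the key formats and example manifests are invented for illustration.

```python
def plan(old_state: dict, new_config: dict) -> dict:
    """Simplified model of how Terraform diffs for_each instances by key."""
    actions = {}
    for key in old_state.keys() - new_config.keys():
        actions[key] = "destroy"          # key vanished -> destroy old instance
    for key in new_config.keys() - old_state.keys():
        actions[key] = "create"           # new key -> create new instance
    for key in old_state.keys() & new_config.keys():
        actions[key] = "update" if old_state[key] != new_config[key] else "no-op"
    return actions

# toset(documents): the YAML body itself is the key, so editing replicas
# changes the key and forces a destroy/create pair.
old_docs = {"kind: Kafka\nreplicas: 3": "kind: Kafka\nreplicas: 3"}
new_docs = {"kind: Kafka\nreplicas: 5": "kind: Kafka\nreplicas: 5"}
print(plan(old_docs, new_docs))

# manifests: a stable id keys the map (format shown here is hypothetical),
# so the same edit plans as a single in-place update.
old_man = {"kafka.strimzi.io/v1beta2/Kafka/my-cluster": "replicas: 3"}
new_man = {"kafka.strimzi.io/v1beta2/Kafka/my-cluster": "replicas: 5"}
print(plan(old_man, new_man))
```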