gavinbunney / terraform-provider-kubectl

Terraform provider to handle raw kubernetes manifest yaml files
https://registry.terraform.io/providers/gavinbunney/kubectl
Mozilla Public License 2.0
612 stars · 105 forks

for_each on kubectl_path_documents errors with The "for_each" value depends on resource attributes that cannot be determined until apply #215

Open bluebrown opened 1 year ago

bluebrown commented 1 year ago

Hi, I am trying to use the provider, but Terraform errors with:

╷
│ Error: Invalid for_each argument
│
│   on ..\..\..\..\modules\kube\kyverno.tf line 26, in resource "kubectl_manifest" "kyverno_policies":
│   26:   for_each   = toset(data.kubectl_path_documents.kyverno_policy_manifests.documents)
│     ├────────────────
│     │ data.kubectl_path_documents.kyverno_policy_manifests.documents is a list of string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict
│ how many instances will be created. To work around this, use the -target argument to first apply only the resources
│ that the for_each depends on.
╵

I followed the example from the docs:

data "kubectl_path_documents" "kyverno_policy_manifests" {
  pattern = "${path.module}/manifests/kyverno/*.yaml"
}

resource "kubectl_manifest" "kyverno_policies" {
  depends_on = [helm_release.kyverno]
  for_each   = toset(data.kubectl_path_documents.kyverno_policy_manifests.documents)
  yaml_body  = each.value
}

It's strange because I have this in a module, and it only fails in 1 out of 2 places where the module is used.

severity1 commented 1 year ago

Also experiencing this on kubectl_file_documents. Can someone shed some light on this? Even count behaves the same; my workaround is to statically set the count, which is counterintuitive.

EDIT: Sorry, I tried again just now and it seems to be working fine.

data "http" "nodelocaldns-raw" {
  url = "https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml"
}

data "kubectl_file_documents" "nodelocaldns-doc" {
    content = data.http.nodelocaldns-raw.response_body
}

resource "kubectl_manifest" "nodelocaldns" {
    for_each  = data.kubectl_file_documents.nodelocaldns-doc.manifests
    yaml_body = each.value
}

kin3303 commented 1 year ago

I have the same issue.

data "http" "get_cwagent_serviceaccount" {
  url = "https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cwagent/cwagent-serviceaccount.yaml"
  # Optional request headers
  request_headers = {
    Accept = "text/*"
  }
}

data "kubectl_file_documents" "cwagent_docs" {
  content = data.http.get_cwagent_serviceaccount.body
}

resource "kubectl_manifest" "cwagent_serviceaccount" {
  count     = length(data.kubectl_file_documents.cwagent_docs.manifests)
  yaml_body = element(data.kubectl_file_documents.cwagent_docs.manifests, count.index)

  depends_on = [
    kubernetes_namespace_v1.amazon_cloudwatch
  ]
}

╷
│ Error: Invalid count argument
│
│   on modules\terraform-aws-eks-logging\cloudwatch_agent.tf line 24, in resource "kubectl_manifest" "cwagent_serviceaccount":
│   24:   count = length(data.kubectl_file_documents.cwagent_docs.manifests)
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many
│ instances will be created. To work around this, use the -target argument to first apply only the resources that the count
│ depends on.
╵

80kk commented 1 year ago

Same issue here. Terraform 1.3.4, but had this issue also on 1.1.x.

count:

data "kubectl_path_documents" "cluster_autoscaler" {
#    pattern = file("${path.module}/templates/cluster-autoscaler.yaml")
    pattern = "${path.module}/templates/cluster-autoscaler.yaml"
    vars = {
      CLUSTER_AUTOSCALER_ROLE_ARN = aws_iam_role.cluster_autoscaler.arn
      CLUSTER_AUTOSCALER_IMAGE_TAG = "v1.22.2"
      CLUSTER_NAME = var.cluster_name
    }
}

resource "kubectl_manifest" "cluster_autoscaler" {
  count = length(data.kubectl_path_documents.cluster_autoscaler.documents)
  yaml_body = element(data.kubectl_path_documents.cluster_autoscaler.documents, count.index)
}

gives:

╷
│ Error: Invalid count argument
│ 
│   on ../modules/eks/cluster-autoscaler.tf line 97, in resource "kubectl_manifest" "cluster_autoscaler":
│   97:   count = length(data.kubectl_path_documents.cluster_autoscaler.documents)
│ 
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources
│ that the count depends on.
╵

while for_each returns:

│ Error: Invalid for_each argument
│ 
│   on ../modules/eks/cluster-autoscaler.tf line 94, in resource "kubectl_manifest" "cluster_autoscaler":
│   94:   for_each  = toset(data.kubectl_path_documents.cluster_autoscaler.documents)
│     ├────────────────
│     │ data.kubectl_path_documents.cluster_autoscaler.documents is a list of string, known only after apply
│ 
│ The "for_each" set includes values derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.
│ 
│ When working with unknown values in for_each, it's better to use a map value where the keys are defined statically in your configuration and where only the values contain apply-time results.
│ 
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.
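
For what it's worth, the static-keys workaround that the for_each error message suggests could look roughly like this for the example above. This is only a sketch, not the provider's documented usage: it assumes each matched file holds a single YAML document (fileset cannot split multi-document files the way kubectl_path_documents does) and that the template placeholders use templatefile's ${} syntax rather than the provider's vars substitution:

resource "kubectl_manifest" "cluster_autoscaler" {
  # fileset() is evaluated from local files at plan time, so the map keys are
  # known even when the rendered values are not.
  for_each = {
    for f in fileset("${path.module}/templates", "cluster-autoscaler*.yaml") :
    f => templatefile("${path.module}/templates/${f}", {
      CLUSTER_AUTOSCALER_ROLE_ARN  = aws_iam_role.cluster_autoscaler.arn
      CLUSTER_AUTOSCALER_IMAGE_TAG = "v1.22.2"
      CLUSTER_NAME                 = var.cluster_name
    })
  }
  yaml_body = each.value
}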

eugeneaik commented 1 year ago

https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/data-sources/kubectl_path_documents#load-all-manifest-documents-via-for_each-recommended

The doc is wrong and should be fixed to:

resource "kubectl_manifest" "test" {
    for_each  = data.kubectl_path_documents.docs.manifests
    yaml_body = each.value
}
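
(For context: kubectl_path_documents exposes both documents, a list of strings, and manifests, a map with one entry per document; the map form is what the linked doc section recommends for for_each. Whether it avoids the error still depends on the map keys being known at plan time.)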

litan1106 commented 1 year ago

I ran into the same issue. This error seems common when you have 2 or more resources in a module. Creating a new file in the module and moving a resource over seems to fix it as well. lol

The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that │ will identify the instances of this resource

ozahavi commented 1 year ago

> I ran into the same issue. This error seems common when you have 2 or more resources in a module. Creating a new file in the module and moving a resource over seems to fix it as well. lol
>
> The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource

What TF version are you using? I tried to split the files as you recommended, and I am still encountering this issue. The weird part is that in a different module of mine this is working; it is as if the failure is inconsistent.

jaimehrubiks commented 1 year ago

I'm finding this issue too. It used to work for a while but now it doesn't :(

justinb-shipt commented 1 year ago

https://developer.hashicorp.com/terraform/language/meta-arguments/for_each#limitations-on-values-used-in-for_each

bluebrown commented 1 year ago

@justinb-shipt, well, it used to work. One day it just broke. Even stranger, it broke just in some places but not in others.

Anyway, I just dropped this provider and use other tools now.

justinb-shipt commented 1 year ago

@bluebrown, I agree. I'm using it to manage the CRDs for our service mesh implementation and it's still working fine on ~20 other clusters and broke on 2. Very strange. I just wanted to reference the Hashicorp doc, since nobody else has, and possibly dig into the code.

jaimehrubiks commented 1 year ago

I came up with a hacky solution that involves replacing this provider with the helm provider, which always works.

You can just put a directory (e.g. myhelmchart) next to your terraform files with a file "Chart.yaml" (google an example of it and copy it) and a directory called "templates" where you put your yaml files. Then, using the helm provider, just point the "repository" to "${path.module}/myhelmchart/".


Additionally, because my files were heavily templated using terraform's templatefile (instead of helm templates), I opted to pass the yaml directly from terraform to helm. I don't love it, but it was much quicker for now, as I could switch from the kubectl provider to helm instantly.

To do this, I pass a variable with helm using:

  set {
    name  = "myyaml"
    value = base64encode(templatefile("${path.module}/my-yaml-files.yaml",{
      variables = var.variables
    }))
  }

And then I decode it in helm by putting this into templates/files.yaml:

{{ .Values.myyaml | b64dec }}

Hope it helps someone
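
For reference, the wrapper chart's Chart.yaml can be minimal (just apiVersion: v2 plus a name and version), and the helm_release wiring might look like the sketch below. The release name is illustrative, and referencing the chart directory directly via the chart argument is one way to wire it:

resource "helm_release" "manifests" {
  name  = "my-manifests"                # illustrative release name
  chart = "${path.module}/myhelmchart"  # local wrapper chart directory

  # Pass the rendered YAML through as a base64-encoded value, as described above.
  set {
    name  = "myyaml"
    value = base64encode(templatefile("${path.module}/my-yaml-files.yaml", {
      variables = var.variables
    }))
  }
}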

d-m commented 1 year ago

I managed to get this working by looping over fileset instead:

data "kubectl_path_documents" "kyverno_policy_manifests" {
  pattern = "${path.module}/manifests/kyverno/*.yaml"
}

resource "kubectl_manifest" "kyverno_policies" {
  depends_on = [helm_release.kyverno]
  for_each   = length(fileset("${path.module}/manifests/kyverno", "*.yaml"))
  yaml_body  = element(data.kubectl_path_documents.kyverno_policy_manifests.documents, count.index)
}

bluebrown commented 1 year ago

Given the lack of a comment from the maintainer, despite the high number of users facing the problem, I think it's safe to say that this provider is not maintained anymore.

sebv004 commented 1 year ago

On my side, I got this issue with a depends_on in my data "kubectl_file_documents" block... I removed it and it works like a charm ...

In my terraform project, I've 2 different files with these documents/manifests...

Elegant996 commented 1 year ago

> I managed to get this working by looping over fileset instead:
>
> data "kubectl_path_documents" "kyverno_policy_manifests" {
>   pattern = "${path.module}/manifests/kyverno/*.yaml"
> }
>
> resource "kubectl_manifest" "kyverno_policies" {
>   depends_on = [helm_release.kyverno]
>   for_each   = length(fileset("${path.module}/manifests/kyverno", "*.yaml"))
>   yaml_body  = element(data.kubectl_path_documents.kyverno_policy_manifests.documents, count.index)
> }

How did you run a for_each and then use a count? Terraform prevents this by default, no?

marcinprzybysz86 commented 1 year ago

> I managed to get this working by looping over fileset instead:
>
> data "kubectl_path_documents" "kyverno_policy_manifests" {
>   pattern = "${path.module}/manifests/kyverno/*.yaml"
> }
>
> resource "kubectl_manifest" "kyverno_policies" {
>   depends_on = [helm_release.kyverno]
>   for_each   = length(fileset("${path.module}/manifests/kyverno", "*.yaml"))
>   yaml_body  = element(data.kubectl_path_documents.kyverno_policy_manifests.documents, count.index)
> }

I tried:

resource "kubectl_manifest" "argocd" {
  for_each   = length(fileset("${path.module}/manifests/", "*.yaml"))
  yaml_body  = element(data.kubectl_path_documents.docs.documents, count.index)
  override_namespace = "argocd"
}

and as a result:

Releasing state lock. This may take a few moments...
╷
│ Error: Invalid for_each argument
│
│   on modules\helm_argocd\main.tf line 7, in resource "kubectl_manifest" "argocd":
│    7:   for_each   = length(fileset("${path.module}/manifests/", "*.yaml"))
│     ├────────────────
│     │ path.module is "modules/helm_argocd"
│
│ The given "for_each" argument value is unsuitable: the "for_each" argument
│ must be a map, or set of strings, and you have provided a value of type
│ number.
╵
╷
│ Error: Reference to "count" in non-counted context
│
│   on modules\helm_argocd\main.tf line 8, in resource "kubectl_manifest" "argocd":
│    8:   yaml_body = element(data.kubectl_path_documents.docs.documents, count.index)
│
│ The "count" object can only be used in "module", "resource", and "data"
│ blocks, and only when the "count" argument is set.
╵
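
For anyone copying the quoted snippet: for_each cannot take a number, and count.index only exists under count, which is exactly what the errors above say. The fileset idea does work when expressed with count, since fileset() reads local file names at plan time. A sketch, assuming each file holds a single YAML document:

resource "kubectl_manifest" "kyverno_policies" {
  depends_on = [helm_release.kyverno]
  # fileset() is evaluated from local files at plan time, so the count is
  # known early even though the document contents are apply-time values.
  count      = length(fileset("${path.module}/manifests/kyverno", "*.yaml"))
  yaml_body  = element(data.kubectl_path_documents.kyverno_policy_manifests.documents, count.index)
}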

ivankovnatsky commented 1 year ago

I've created a local variable map of strings. Works fine for me.

locals {
  kube_event_exporter_manifests = {
    configmap = <<-YAML
apiVersion: v1
kind: ConfigMap
metadata:
...
YAML
    deployment = <<-YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-exporter
  namespace: ${kubernetes_namespace.kube_event_exporter.metadata[0].name}
...
YAML
  }
}

resource "kubectl_manifest" "kube_event_exporter" {
  for_each  = local.kube_event_exporter_manifests
  yaml_body = each.value
}
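
This works because the map keys (configmap, deployment) are literal strings in the configuration, so Terraform can enumerate the instances at plan time even though the values reference attributes that are only known after apply.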

eugeneaik commented 1 year ago

The official kubernetes provider already supports creating any manifest: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/manifest
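
A minimal sketch of that resource for a single-document file (the file name here is illustrative; note that kubernetes_manifest needs access to the cluster during plan):

resource "kubernetes_manifest" "example" {
  # yamldecode() handles one YAML document per file.
  manifest = yamldecode(file("${path.module}/manifests/example.yaml"))
}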

boillodmanuel commented 9 months ago

Hi,

I figured out that using depends_on with data sources (not only kubectl_path_documents) will often produce the same cannot be determined until apply error.

This looks more like an internal terraform issue.

Similar to #61

I opened an issue on terraform directly: https://github.com/hashicorp/terraform/issues/34391

bensoer commented 5 months ago

I've managed to implement the kubectl_path_documents functionality using basic Terraform functions, and kind of side-stepped the issue that way:

https://github.com/gavinbunney/terraform-provider-kubectl/issues/61#issuecomment-2046499793

Hope it helps people here
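
For reference, a rough sketch of that idea using only built-in functions; the splitting logic below is illustrative and assumed, not copied from the linked comment:

locals {
  # Collect every YAML file, split multi-document files on the "---"
  # separator, and drop empty chunks. file() and fileset() are evaluated
  # at plan time, so the resulting count is always known.
  manifest_documents = flatten([
    for f in fileset("${path.module}/manifests", "*.yaml") : [
      for doc in split("\n---", file("${path.module}/manifests/${f}")) : doc
      if trimspace(doc) != ""
    ]
  ])
}

resource "kubectl_manifest" "all" {
  count     = length(local.manifest_documents)
  yaml_body = local.manifest_documents[count.index]
}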