hashicorp / terraform-provider-kubernetes

Terraform Kubernetes provider
https://www.terraform.io/docs/providers/kubernetes/
Mozilla Public License 2.0

"Warning: Attribute not found in schema" after updating CRD #2126


gabegorelick commented 1 year ago

Terraform version, Kubernetes provider version and Kubernetes version

Terraform version: 1.4.4
Kubernetes Provider version: 2.20.0
Kubernetes version: 1.23

Terraform configuration

I'm using kubernetes_manifest to define a custom resource for which the CRD is already installed. The CRD in question is from https://github.com/external-secrets/external-secrets/blob/main/deploy/crds/bundle.yaml.

resource "kubernetes_manifest" "secret_store" {
  manifest = {
    apiVersion = "external-secrets.io/v1beta1"
    kind = "ClusterSecretStore"
    metadata = {
      name: "foo"
    }
    spec = {
      provider = {
        aws = {
           # some AWS config here that's not relevant
        }
      }
    }
  }
}
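
For context, the CRD bundle can be applied straight from the repository before Terraform manages the custom resource; a minimal sketch (this is just the raw URL of the bundle linked above):

kubectl apply -f https://raw.githubusercontent.com/external-secrets/external-secrets/main/deploy/crds/bundle.yaml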

Question

After updating to a newer version of the CRD, terraform plan continuously warns about attributes not found in the schema, for example: "Unable to find schema type for attribute: spec.provider.alibaba.endpoint". This attribute used to exist in the CRD but no longer does (https://github.com/external-secrets/external-secrets/commit/59f5759106c51dc84a8344fb2a89323c50432555#diff-39388de29a1b8f0becdcbbb94fc710b76b21c4fb71e769d523d6bbede3f1feb7L38).
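
To confirm what the installed CRD actually defines now, it can be queried directly; a rough sketch using kubectl and jq, assuming the CRD follows the standard plural.group naming:

# List the attribute names the live CRD still defines under spec.provider.alibaba
kubectl get crd clustersecretstores.external-secrets.io -o json \
  | jq '.spec.versions[] | select(.name == "v1beta1")
        | .schema.openAPIV3Schema.properties.spec.properties.provider.properties.alibaba.properties
        | keys'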

Inspecting the state file, the object attribute of the kubernetes_manifest.secret_store resource has defaults populated for a number of fields, including the alibaba one that Terraform is complaining about:

{
  // ...
  "alibaba": {
    "auth": {
      "secretRef": {
        "accessKeyIDSecretRef": {
          "key": null,
          "name": null,
          "namespace": null
        },
        "accessKeySecretSecretRef": {
          "key": null,
          "name": null,
          "namespace": null
        }
      }
    },
    "endpoint": null, // This is the attribute that was removed from the CRD
    "regionID": null
  },
}

To be clear, we do not set these alibaba fields. Terraform has populated them, presumably when the fields were defined in the CRD, and is now complaining about the presence of fields that are no longer in the CRD.

So my question is, how do I fix this?

Potential solutions:

Either way, it seems like there should be an easier solution for this.
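
In the meantime, the stale defaults can at least be located by pulling the state and filtering it; a rough sketch, assuming the resource address from the configuration above and that the stored object sits under the instance attributes:

terraform state pull \
  | jq '.resources[]
        | select(.type == "kubernetes_manifest" and .name == "secret_store")
        | .instances[].attributes.object.spec.provider.alibaba'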

ep4sh commented 1 year ago

Facing the same issue after upgrading Kubernetes from 1.22 to 1.25. We have a lot of applications in Argo CD, and it seems most of them break when we deploy them via Terraform. There is no such effect when we deploy via plain Kubernetes YAML manifests:

╷
│ Warning: Attribute not found in schema
│
│   with module.cloud.module.argocd.kubernetes_manifest.appproject_argocd_autosync,
│   on .terraform/modules/cloud.argocd/settings.tf line 1, in resource "kubernetes_manifest" "appproject_argocd_autosync":
│    1: resource "kubernetes_manifest" "appproject_argocd_autosync" {
│
│ Unable to find schema type for attribute:
│ metadata.clusterName
│
│ (and 49 more similar warnings elsewhere)
╵
╷
│ Error: Failed to transform List value into Tuple of different length
│
│ Error: %!s(<nil>)
│ ...at attribute:
│ spec.sources
╵
╷
│ Error: Failed to transform Object element into Object element type
│
│ Error (see above) at attribute:
│ spec.sources
╵
╷
│ Error: Failed to transform Object element into Object element type
│
│ Error (see above) at attribute:
│ spec

...

Terraform version: 1.5.5
Kubernetes Provider version: 2.22.0 / 2.23.0
Kubernetes version: 1.25

blockguardian commented 1 year ago

Any updates on this? Facing the same issue.

Unable to find schema type for attribute:
metadata.clusterName

BlakeB415 commented 1 year ago

Same issue. Did anyone figure out how to get around this?

joaocc commented 1 year ago

It seems some people simply use kubectl_manifest and disable the YAML schema validation.

Example from a fine module for cert-manager: https://github.com/terraform-iaac/terraform-kubernetes-cert-manager/blob/9082a84de3969780c7acfe91f88601349028be33/main.tf#L42

kubectl_manifest" "cluster_issuer" {
  count = var.cluster_issuer_create ? 1 : 0

  validate_schema = false
...

However, having to implement these workarounds kind of defeats some of the benefits of Terraform and makes the code unnecessarily complex... It would be great if kubernetes_manifest allowed for something similar in the provider resources themselves (one can ask nicely :))
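
For completeness, a fuller sketch of that workaround, assuming the gavinbunney/kubectl provider and a placeholder YAML file for the issuer manifest:

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}

resource "kubectl_manifest" "cluster_issuer" {
  # Skip client-side schema validation so attributes removed from the CRD don't break plans
  validate_schema = false
  yaml_body       = file("${path.module}/cluster-issuer.yaml")
}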

cippaciong commented 1 year ago

I had the same issue while updating cert-manager in a Terraform module that also deployed two ClusterIssuers.

I think the underlying problem is the removal of metadata.clusterName in recent Kubernetes versions: https://github.com/kubernetes/kubernetes/pull/108717

My workaround was to remove the two ClusterIssuers from the Terraform state without deleting the resources, and then import them back. Here's what I did (I use terragrunt):

# Show existing resources in tf state
terragrunt state list
  # helm_release.cert_manager
  # kubernetes_manifest.prod_issuer[0]
  # kubernetes_manifest.staging_issuer[0]
  # kubernetes_namespace.cert_manager
  # kubernetes_secret.azuredns_credentials[0]

# Delete cluster issuers from tf state
terragrunt state rm 'kubernetes_manifest.staging_issuer[0]'
terragrunt state rm 'kubernetes_manifest.prod_issuer[0]'

# Import them back
terragrunt import 'kubernetes_manifest.staging_issuer[0]' "apiVersion=cert-manager.io/v1,kind=ClusterIssuer,name=letsencrypt-staging"
terragrunt import 'kubernetes_manifest.prod_issuer[0]' "apiVersion=cert-manager.io/v1,kind=ClusterIssuer,name=letsencrypt-prod"

# Final apply
terragrunt apply

After this operation the problem was gone.
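
With plain Terraform (no terragrunt) the equivalent should be the standard state commands, for example:

terraform state rm 'kubernetes_manifest.staging_issuer[0]'
terraform import 'kubernetes_manifest.staging_issuer[0]' "apiVersion=cert-manager.io/v1,kind=ClusterIssuer,name=letsencrypt-staging"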

kstevensonnv commented 12 months ago

kubectl_manifest

This is abandoned; I would not rely on it. There have been no updates in two years, and the author isn't merging pull requests.

I don't know how this hasn't received any attention so far. Upgrading CRDs is entirely routine and often mandatory. If doing so breaks managing resources with Terraform, that's a significant issue with the provider.

Having to remove those resources from the state and import them is ridiculous.

MonicaMagoniCom commented 11 months ago

I have the same issue with a manifest containing a SealedSecret. The error I get is:

Unable to find schema type for attribute:
object.spec.template.metadata.creationTimestamp

maxsxu commented 9 months ago

Issue still exists with the latest kubernetes provider, 2.25.2:

Unable to find schema type for attribute:
metadata.clusterName

christophercutajar commented 8 months ago

Issue still exists with kubernetes provider 2.26.0:

Unable to find schema type for attribute:
metadata.clusterName

grzleadams commented 3 months ago

Issue still exists with kubernetes provider 2.26.0:

Unable to find schema type for attribute:
metadata.clusterName

Still a problem in 2.31.0...