gavinbunney / terraform-provider-kubectl

Terraform provider to handle raw Kubernetes manifest YAML files
https://registry.terraform.io/providers/gavinbunney/kubectl
Mozilla Public License 2.0

Modifications to existing resources silently fail to apply #102

Open jbg opened 3 years ago

jbg commented 3 years ago

I'm using this provider to manage Tekton trigger templates, like this:

resource "kubectl_manifest" "triggertemplate" {
  yaml_body = <<-EOT
    apiVersion: triggers.tekton.dev/v1alpha1
    kind: TriggerTemplate
    metadata:
      name: foo
      namespace: ci
    spec:
      resourceTemplates:
      - apiVersion: tekton.dev/v1beta1
        kind: PipelineRun
        metadata:
          generateName: foo-
        spec:
          serviceAccountName: ci-foo
          pipelineRef:
            name: foo
          workspaces:
          - name: source
            volumeClaimTemplate:
              spec:
                accessModes: ["ReadWriteMany"]
                storageClassName: ci-workspaces
                resources:
                  requests:
                    storage: 1Gi
  EOT
}

This resource applies fine the first time and creates the TriggerTemplate in the cluster. However, if I then modify the yaml_body and apply again, Terraform reports that kubectl_manifest.triggertemplate will be updated in-place and the apply completes successfully, but nothing actually changes in-cluster. If I then run another plan, the resource is not reported as needing modification, even if I refresh its state first.

gavinbunney commented 3 years ago

Hi @jbg, can you provide a few more details on which field you are changing and what's not being updated? There are some Kubernetes attributes (like annotations) that are never removed by kubectl apply, so they remain even if you take them out of your HCL.
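As a rough illustration of what I mean (the ConfigMap and annotation name below are made up for the example): if you apply a manifest with an annotation and later delete that line from yaml_body, the annotation can still be left on the live object after the next apply, per the behaviour described above.

resource "kubectl_manifest" "example" {
  yaml_body = <<-EOT
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example
      namespace: default
      annotations:
        # deleting this line later may not remove the annotation
        # from the live object on the next apply
        example.com/team: ci
    data:
      foo: bar
  EOT
}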

jbg commented 3 years ago

I haven't tested changing labels and annotations, but changing anything inside spec has no effect (the resource shows a change when I run plan, and the change applies successfully when I run apply, but nothing changes in-cluster).

This behaviour actually happens with almost all Tekton-related resources. I've now set force_new = true on all of them, which works around the problem.
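For anyone hitting the same thing, the workaround is just the force_new flag on each kubectl_manifest resource; roughly (same yaml_body as the original resource above, spec elided):

resource "kubectl_manifest" "triggertemplate" {
  # force_new = true makes Terraform delete and recreate the object on any
  # change to yaml_body, instead of the in-place update that never lands
  force_new = true

  yaml_body = <<-EOT
    apiVersion: triggers.tekton.dev/v1alpha1
    kind: TriggerTemplate
    metadata:
      name: foo
      namespace: ci
    spec:
      # ... same spec as in the original resource above ...
  EOT
}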

I suspect it might have something to do with Tekton's rubbish CRDs; they use x-kubernetes-preserve-unknown-fields: true to just avoid needing to provide any schema at all. Here's the CRD for the Pipeline resource, which also causes this bug, and which is typical of their CRDs:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: pipelines.tekton.dev
  labels:
    app.kubernetes.io/instance: default
    app.kubernetes.io/part-of: tekton-pipelines
    pipeline.tekton.dev/release: "v0.24.1"
    version: "v0.24.1"
spec:
  group: tekton.dev
  versions:
    - name: v1beta1
      served: true
      storage: true
      # Opt into the status subresource so metadata.generation
      # starts to increment
      subresources:
        status: {}
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  names:
    kind: Pipeline
    plural: pipelines
    categories:
      - tekton
      - tekton-pipelines
  scope: Namespaced
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1beta1"]
      clientConfig:
        service:
          name: tekton-pipelines-webhook
          namespace: tekton-pipelines

As you can see, there's no schema at all. Maybe this is interfering with the way this provider patches resources?