pulumi / pulumi-kubernetes

A Pulumi resource provider for Kubernetes to manage API resources and workloads in running clusters
https://www.pulumi.com/docs/reference/clouds/kubernetes/
Apache License 2.0

Terraform-like declarative, strict desired-state behaviour #3262

Open zsolt-p opened 1 week ago

zsolt-p commented 1 week ago

Hello,

I am evaluating moving some of my Terraform-managed projects to Pulumi. In my testing I exercise certain scenarios in which Terraform behaves in a particular way that I would like to preserve. One such example is Terraform's ability, and default behaviour, of enforcing complete 1:1 agreement between what's defined in code and the actual state of the Kubernetes objects: any manual edits to k8s objects are flagged as drift the next time I run terraform plan or apply. This includes any change to, for example, k8s Deployments, even changes that Terraform did not originally make and has never managed. I'm looking to understand whether the same is possible with Pulumi, and if so, how.

Example:

resource "kubernetes_deployment" "my-deployment" {
  metadata {
    name      = "my-deployment"
    namespace = "my-namespace"
  }

  spec {
    replicas = 1

    template {
      spec {
        container {
          name    = "my-deployment"
          image   = "my-deployment:12345"
          command = ["/my-command.sh"]

          env_from {
            config_map_ref {
              name     = "my-configmap"
              optional = false
            }
          }

          env_from {
            secret_ref {
              name = "my-secrets"
            }
          }

          env {
            name  = "LOG_LEVEL"
            value = "info"
          }
        }
      }
    }
  }
}
Then I manually edit the Deployment:

kubectl edit deployments/my-deployment -n my-namespace

and add a new environment variable just after LOG_LEVEL:

        - name: FOO
          value: bar

I would like to maintain this same behaviour using Pulumi; however, I have not been able to achieve it. After defining and deploying the same Deployment using Pulumi TS and then making the manual kubectl edit, running pulumi up --refresh does not propose to remove FOO, even though it is not defined in the TS code. The behaviour is different if I add FOO at the front of the Deployment's env vars: Pulumi then recognises that env[0], which it does manage, has a key and value different from what's in the TS code, and in that case it does revert the manual change. What I'm looking for, though, is a way to tell Pulumi to make the k8s state match the TS code 100%, no exceptions.
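For reference, the equivalent program in Pulumi TypeScript looks roughly like this (a minimal sketch using @pulumi/kubernetes; the selector and pod labels are additions of mine to make the Deployment valid):

import * as k8s from "@pulumi/kubernetes";

// Same Deployment as the Terraform example above.
const appLabels = { app: "my-deployment" };

new k8s.apps.v1.Deployment("my-deployment", {
    metadata: {
        name: "my-deployment",
        namespace: "my-namespace",
    },
    spec: {
        replicas: 1,
        selector: { matchLabels: appLabels },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "my-deployment",
                    image: "my-deployment:12345",
                    command: ["/my-command.sh"],
                    envFrom: [
                        { configMapRef: { name: "my-configmap", optional: false } },
                        { secretRef: { name: "my-secrets" } },
                    ],
                    env: [{ name: "LOG_LEVEL", value: "info" }],
                }],
            },
        },
    },
});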

Things I have looked into (without success):

Thanks in advance, and sorry if this is documented somewhere

EronWright commented 1 week ago

Thanks @zsolt-p for the report.

Kubernetes is designed for multi-party authoring of objects, e.g. where one party (or "manager") authors the bulk of the spec, another party (a controller) authors the status block, and another party (an auto-scaler) authors the replicas field. The notion of one party having total control of the object is somewhat counter to its design.

The schemas of the Kubernetes resource types contain information about how to merge the intentions of different parties. For example, the pod's env vars are merged across all parties, and the ownership is tracked by the server, so that each party gets "replace" semantics. If, for example, your program was setting FOO, then later you removed FOO from your program, it would be cleared out, while any vars set by other parties would survive.
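A quick way to see this tracking in action (assuming the Deployment above is live in your cluster) is to ask the server for the managed-field metadata:

kubectl get deployment my-deployment -n my-namespace -o yaml --show-managed-fields

Each entry under metadata.managedFields records which manager (Pulumi, kubectl, a controller, etc.) set which fields. The env list in particular is merged by each entry's name key, so each manager owns only the variables it set.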

When it comes to drift detection, the scope is limited to the fields that your program owns (i.e., fields for which it sets an intentional value). Could you outline a specific case of drift that you'd like Pulumi to remediate?