pulumi / pulumi-kubernetes

A Pulumi resource provider for Kubernetes to manage API resources and workloads in running clusters
https://www.pulumi.com/docs/reference/clouds/kubernetes/
Apache License 2.0

Support a stateless render YAML workflow #1468

Open seivan opened 3 years ago

seivan commented 3 years ago

In order to deploy a local development version of the entire system, we used to run something like docker stack deploy --compose-file /path/to/docker-compose.yml, but eventually we manually translated them all into regular K8s resources (YAML files).

But the issue is that our production YAML files are defined in TypeScript and generated by Pulumi. It would be neat to have a separate local stack that only contains K8s resources in TypeScript, pointed at a specific (local) K8s context, instead of having to juggle two different ways of doing the same thing.

Similar to what was asked in https://github.com/pulumi/pulumi-cloud/issues/12

On a side note, the TypeScript-to-K8s-resources translation is magnificent and should be its own damn tool :)

Edit: I found this, but it's unclear whether it runs locally or on Pulumi infra and pollutes the state store.

leezen commented 3 years ago

@seivan Just to clarify, are you trying to generate a docker-compose specification from a Pulumi program? If you're using k8s locally, is there a reason it's not sufficient to specify a different provider that points at your local instance?

seivan commented 3 years ago

@leezen

Just to clarify, are you trying to generate a docker-compose specification from a Pulumi program?

Not exactly. I mention docker-compose only because it's an approach for deploying multiple containers at once. Kubernetes has a way of translating that into whatever it needs to make it work, but you can also just write all the Deployments/Services yourself, which we did. Just a short history of the whats and whys.

If you're using k8s locally, is there a reason it's not sufficient to specify a different provider that points at your local instance?

Sorry, I was probably unclear. You actually already support what I am asking for, which is generating the YAML files locally so I can deploy manually with kubectl apply.

But I am wondering whether this runs locally, or whether it updates some state/store on your backend. I ask because the documentation mentions:

The rendered manifests are kept in sync with changes to the program on each update.

This is the part I am concerned about. I don't need it to track changes on each update since it runs locally; I just need it to generate the YAML files each time I run it, since it's only for a local dev environment.

lblackstone commented 3 years ago

But I am wondering if this runs locally, or if it updates some state/store on your backend?

Yes, Pulumi does maintain the state in the selected backend; our Pulumi Service by default, or using a local backend if you prefer. This is because these resources are handled as part of the normal update lifecycle. The Pulumi engine doesn't treat them any differently than other resources. The only difference is in the k8s provider, which conditionally renders them to disk if the flag is set.
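For reference, the flag Levi mentions is the provider's `renderYamlToDirectory` option. A minimal sketch of how it might be used (the directory name and the Deployment spec are illustrative, not from this thread):

```typescript
import * as k8s from "@pulumi/kubernetes";

// A provider configured to render manifests to ./rendered on disk
// instead of applying them to a cluster.
const renderProvider = new k8s.Provider("render", {
    renderYamlToDirectory: "rendered",
});

// Any resource created with this provider is written out as YAML
// during `pulumi up` rather than deployed.
const nginx = new k8s.apps.v1.Deployment("nginx", {
    spec: {
        selector: { matchLabels: { app: "nginx" } },
        replicas: 1,
        template: {
            metadata: { labels: { app: "nginx" } },
            spec: { containers: [{ name: "nginx", image: "nginx:1.25" }] },
        },
    },
}, { provider: renderProvider });
```

Note that, as described above, the resources still pass through the normal update lifecycle, so the run still records state in whatever backend is active.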

generating the yaml files locally so I can deploy manually using kubectl apply

I am curious why you've chosen kubectl apply rather than using the provider to deploy the resources? I know some users render the files and also use our YAML support to deploy them.
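The latter pattern, feeding a rendered directory back through the provider's YAML support, might look roughly like this (the glob path is an assumption; point it at your own render directory):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Sketch: deploy previously rendered manifests via the YAML support.
const manifests = new k8s.yaml.ConfigGroup("rendered-manifests", {
    files: ["rendered/*.yaml"],
});
```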

seivan commented 3 years ago

Yes, Pulumi does maintain the state in the selected backend; our Pulumi Service by default, or using a local backend if you prefer. This is because these resources are handled as part of the normal update lifecycle. The Pulumi engine doesn't treat them any differently than other resources. The only difference is in the k8s provider, which conditionally renders them to disk if the flag is set.

But if it renders them locally on disk, then what's the point of maintaining a state for that type of resource?

I am curious why you've chosen to use kubectl apply rather than using the provider to deploy the resources? I know some users are rendering the files and also using our YAML support to deploy them

Because that's all I need really and that's what you did in your example.

There is no reason to store this state on Pulumi, there is no S3, Cloudfront, RDS, or Route53 or anything stateful.

What I really want is just static typing and one language. Our production is using Pulumi to deploy, and there we use other stateful AWS resources, which justifies the Pulumi Service being involved.

I don't need the Pulumi Service for local deployment or for maintaining state for it; I just need the Pulumi TS API for generating YAML files so we can remain in the same world.

EvanBoyle commented 3 years ago

@seivan I (engineer at Pulumi) am actually working on a project that you might find interesting. Would you be interested in a short demo and sharing some of your experiences with the product? https://calendly.com/pulumi-labs/60-minutes

lblackstone commented 3 years ago

But if it renders them locally on disk, then what's the point of maintaining a state for that type of resource?

In theory, there's no need to maintain the state for rendered resources, but practically speaking, that would require deep changes in the provider and possibly the engine as well. Part of the motivation for implementing it in this way was to allow integration with other resources, e.g., if you need an IP address from a cloud load balancer. This way, property values are resolved correctly before the manifests are rendered.

It's not quite what you're asking, but I think you could set up a throwaway local backend to approximate a stateless rendering workflow.
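A throwaway-backend run along those lines might look like this (all names are illustrative; requires the Pulumi CLI):

```shell
# Create a disposable state directory and log in to it as a file backend.
state_dir=$(mktemp -d)
pulumi login "file://${state_dir}"

# Init a scratch stack and run the program; with renderYamlToDirectory set,
# the provider writes the manifests to disk during the update.
pulumi stack init scratch
pulumi up --yes --suppress-outputs

# Discard the stack and its state entirely.
pulumi stack rm scratch --yes
rm -rf "${state_dir}"
```

The rendered YAML survives in the render directory; the state is thrown away with the temp directory, approximating the stateless workflow requested here.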

reith commented 1 year ago

@EvanBoyle

I know this issue is a little bit old, but just curious, do you have a solution now?

farvour commented 1 year ago

I also have a similar workflow to this where I'd like to leverage some sort of stateless flag for the k8s provider. We're using ArgoCD to deploy the diff'ed manifests, migrating some things from Helm/helmfile -> Pulumi. We want to use Pulumi for much of our other providers such as AWS.

Right now I have to use a local, disposable file backend during my ArgoCD CMP runs. It also makes the stack/config issue dicey since I have to always remember to init the stack on the ArgoCD plugin run in a way that is known so it doesn't try and change the secrets provider on the committed stack configuration.


    "init")
        pulumi_state_temp=$(mktemp -d)
        ${PULUMI_CMD} login file://$pulumi_state_temp
        ${PULUMI_CMD} stack init ${ARGOCD_ENV_PULUMI_STACK} --secrets-provider ${ARGOCD_ENV_PULUMI_SECRETS_PROVIDER} # Will create "new disposable" stack, but re-use existing `Pulumi.stacknamewhatever.yaml` file if it exists in the repo.
        ;;
    "generate")
# Generates the YAML into manifests, which can be loaded up
        ${PULUMI_CMD} update \
            --stack ${ARGOCD_ENV_PULUMI_STACK} \
            --diff \
            --suppress-outputs \
            --yes
        ;;
    *)
        echo "Invalid invocation; supported is init or generate!"
        exit 9
        ;;
esac```
sanmai-NL commented 1 year ago

One could ask oneself: why use Pulumi at all when you don't need the statefulness? You could also write normal source code using the SDKs of the providers you use. Pulumi adds a lot of complexity.

seivan commented 1 year ago

@sanmai-NL Then you've fully misunderstood the initial ticket.

Let me rephrase it: we're writing manifest files for Deployments using Pulumi's API in TypeScript, and want to re-use those manifests for local deployments for a faster feedback loop.

Maintaining two sets of manifest files, where one is loosely typed YAML (not Pulumi's fault), is inviting discrepancies.

Is that less ambiguous?

sanmai-NL commented 1 year ago

What do you mean by ‘loosely typed YAML’? Are you referring to Kubernetes manifests?

If so, I stand by my comment. You are generating YAML using Pulumi? But why would you? You can also use Kubernetes' API and render manifests from the state you attain with that.

seivan commented 1 year ago

What do you mean by ‘loosely typed YAML’? Are you referring to Kubernetes manifests?

Yes.

If so, I stand by my comment. You are generating YAML using Pulumi? But why would you? You can also use Kubernetes' API and render manifests from the state you attain with that.

Yeah, that is the workaround, but an unnecessary one. Pulumi already has the ability to translate its API to the manifest YAML format. Allow outputting it without having to deploy and render via a deployment.

You shouldn't have to go through a full deployment to generate the manifests used by Pulumi; the code that translates from the TS API to YAML isn't dependent on a deployment.

Instead we're left with either the workaround (unnecessary) or maintaining a separate set of YAML (a foot gun).

TL;DR: Look, I've already written the Deployment/ReplicaSet/Pod etc. in TS. Why do I need to repeat that in YAML, or call the K8s API on a remote cluster to generate it, for local dev deployments in minikube or whatever? The translation happens on call, not on deployment, but the workaround you suggested forces a deployment.

sanmai-NL commented 1 year ago

Please note that I'm referring to a future situation where you'd have dropped that Pulumi TS source code and interface directly with the Kubernetes API or whatever provider's API. Whether that API supports dry-run to avoid actual deployment determines whether Pulumi does, not the other way around. I'm suggesting this since you aren't using Pulumi for its main advantage, state tracking.

reith commented 1 year ago

Please note that I'm referring to a future situation where you'd have dropped that Pulumi TS source code and interface directly with the Kubernetes API or whatever provider's API.

What's your point? Does that mean that because at some point you might drop tool X that facilitates Y, you shouldn't use X? Even if at some point someone decides to edit the manifests manually, they can still use the manifests generated by Pulumi as a starting point and make the manual modifications on top of that.

Whether that API supports dry-run to avoid actual deployment determines whether Pulumi does, not the other way around.

The Pulumi blog post OP shared in this thread, which convinced me to use Pulumi to generate the manifests, doesn't mention this. Instead it starts with:

Stop writing Kubernetes YAML by hand, and start using the power of familiar programming languages! Pulumi can generate Kubernetes manifests that easily integrate into existing CI/CD workflows.

I believe I have also seen posts by Pulumi team members on Hacker News suggesting the same workflow. What you regard as complex or unnecessary is actually advertised by Pulumi.

I'm suggesting this since you aren't using Pulumi for its main advantage, state tracking.

Main advantage or not, that state can be derived from the manifests:

But if it renders them locally on disk, then what's the point of maintaining a state for that type of resource?

In theory, there's no need to maintain the state for rendered resources, but practically speaking, that would require deep changes in the provider and possibly the engine as well.

farvour commented 1 year ago

@sanmai-NL I understand that the state can be disposed of; however, I see this as a quality-of-life improvement for the tool. It would let Pulumi avoid leaving litter around that needs to be cleaned up out of band for the workflows @reith is pointing out. I'm not sure what the complexities of having a provider skip the state check/persistence are, but if it's not overly complicated, having the flag would be very helpful for rendered YAML manifests. It could then be turned on or off based on whether one wants Pulumi to interface with k8s directly, or wants the manifests regenerated every run and fed into a third-party tool such as ArgoCD for GitOps. In fact, orgs doing GitOps would want the manifests generated and committed regularly, where tracking the state would not make much sense.

I disagree with your comment that tracking state is Pulumi's only usable feature. That may be true for most of the providers, but the k8s provider is a unique one, for the very use-cases being described in this thread.

If it's a QoL improvement that isn't much effort and ditches the "amnesic state" workaround posited above, then it seems worth implementing for those workflows. Why steer someone toward a tool other than Pulumi for k8s manifest generation when the gap here is simply having rendered-YAML k8s provider resources be stateless (not persisted, and forgotten immediately after the run)?

If one wanted to use helmfile template, or kustomize, or Kapitan, one could; but some of us want to use Pulumi, Levi's article convinced me, and I think it's a useful QoL improvement:

https://www.pulumi.com/blog/kubernetes-yaml-generation/

As Levi pointed out, it may be more difficult to implement than we're describing here, and I don't want to trivialize that. The workaround works; it just feels unclean.