roboll / helmfile

Deploy Kubernetes Helm Charts
MIT License

feat: add '--output-dir' option to other commands (apply/sync etc.) to write generated YAML output to disk #751

Open bitsofinfo opened 5 years ago

bitsofinfo commented 5 years ago

The new --output-dir option added w/ #629 is awesome.

In my use case, I'd really like to optionally capture the generated YAML output from all helmfile commands (i.e. apply/sync etc.) and be able to flush it to disk, in addition to the normal behavior of actually invoking helm and sending it to Kubernetes. I'd like to do this so that I always have a record of the releases generated and sent off to k8s.

This way we could just run a single command instead of both template --output-dir followed by apply.
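For reference, the two-step workflow being described would look roughly like this (the output directory name is illustrative, not from the thread):

```shell
# Step 1: render the literal YAML that would be applied, for record-keeping.
# --output-dir writes one directory per release under the given path.
helmfile template --output-dir ./rendered-manifests

# Step 2: actually apply the desired state to the cluster.
helmfile apply
```

The feature request is essentially to collapse these into a single `helmfile apply --output-dir ...` invocation.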

@lwolf @olivierboudet

lwolf commented 5 years ago

@bitsofinfo is my assumption correct that you're trying to get some sort of GitOps-style deployment, where basically every state of the cluster is stored in a repo (or somewhere similar) and could be applied or rolled back?

bitsofinfo commented 5 years ago

Ideally yes. My helmfiles generate a lot of releases based on other, simplified input. I just like capturing as much of what's going on as possible, and being able to capture all the YAML generated and store it separately anywhere else I want would be great.

I could definitely do that now with template + the new --output-dir argument, but it requires an additional step.

lwolf commented 5 years ago

I see. I had similar requirements when I suggested --output-dir to cover some of the issues I had, but only because the original helm has it. I'm currently using a mix of makefile rules + helmfile labels to render each release separately and commit it to a git repo. But I don't use helmfile to actually apply state any more; I'm using a dedicated GitOps operator.
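A minimal sketch of that render-per-release approach (the release names, label key, and paths here are assumptions for illustration, not lwolf's actual setup):

```make
# Render each release separately via helmfile label selectors,
# writing its manifests into its own directory for the GitOps
# operator to pick up from git.
RELEASES := frontend backend

render: $(RELEASES)

$(RELEASES):
	helmfile --selector name=$@ template --output-dir manifests/$@

commit: render
	git add manifests/
	git commit -m "render: update manifests"

.PHONY: render commit $(RELEASES)
```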

Back to the original issue: to be honest, I don't think --output-dir could be easily added to the apply/sync commands, or that the complexity would be justified (I hope the helmfile maintainers correct me here if I'm wrong). Effectively, it would require doing those same two steps that you need inside the code.

bitsofinfo commented 5 years ago

Are you using flux?

lwolf commented 5 years ago

no, I chose argo-cd

mumoshu commented 5 years ago

@bitsofinfo Hey! For your use-case, I'd recommend committing helmfile template results to Git, combined with flux or argo-cd.

The biggest gotcha I'm aware of so far is that you can't use helmfile test and other useful helm commands that depend on helm releases anymore.

But https://github.com/mumoshu/helm-x could give you upgrade --install --include-release-(configmap|secret), which allows you to include helm release configmaps/secrets in the "helmfile template" results, which in turn enables helmfile test and so on.

mumoshu commented 5 years ago

@bitsofinfo @lwolf Btw how are you folks handling secrets in GitOps?

Are you using sealed secrets, aws-secret-operator? Or are you by any chance committing raw secret YAMLs generated by helmfile template into Git?

bitsofinfo commented 5 years ago

I haven't even started on any of the GitOps stuff yet. I just currently have a need to capture the YAML output for auditing purposes, and would ideally like to not have to invoke things 2x (or use yet another tool).

lwolf commented 5 years ago

@mumoshu A custom internal operator, similar to sealed secrets. Everything is bare metal, so no aws-*.

mumoshu commented 5 years ago

@bitsofinfo

or use yet another tool

You do use a CI system, right? So presumably your goal for this issue is to extend helmfile to achieve GitOps without introducing another CD system like flux?

Also back to your original issue:

I'd like to do this so that I always have a record of the releases generated and sent off to k8s.

What's your goal for this? Auditing?

bitsofinfo commented 5 years ago

Yes, the goal for this kind of thing is auditing, and just checking the YAML output into git so we can quickly refer to it should a need arise, without having to go digging into the cluster. That way we always have a copy of the literal YAML generated and applied to the cluster. I currently have no defined or set-in-stone process; I just see this requirement coming from the higher-ups.

Right now I can get this from template; all I want is to avoid running helmfile twice.
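The audit workflow described above can be scripted today with the two-invocation approach; a rough sketch, where the directory naming and commit message are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Capture the literal YAML applied to the cluster, then check it into git
# for auditing, before applying as usual.
set -euo pipefail

OUT_DIR="rendered-$(date +%Y%m%d-%H%M%S)"

helmfile template --output-dir "$OUT_DIR"   # record what will be applied
helmfile apply                              # apply the state to the cluster

git add "$OUT_DIR"
git commit -m "audit: manifests applied from $OUT_DIR"
```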

I classify this as a nice to have.

For me, #752 is more important, as right now I have to write a custom parser that captures all the stdout from the helmfile --log-level debug ... template command.