Closed: jonerer closed this issue 3 years ago
Btw, I understand that the version number needs to be changed when the chart itself changes, but not when the values are updated. It does indeed make sense that the chart itself is locked with the version number. (We typically don't really bump it internally when we make a change, since we're used to a monorepo where "latest" is always the only version. But we could sort that out, I guess.)
The values you're referring to are really part of the chart, which is evidenced by the fact that when you set valuesFiles, it omits the original values.yaml from inside of the chart (unless we repeat it in valuesFiles to ensure that it gets expressed).
The chart versioning covers both the templates and the default values, so similarly, changing values.yaml without updating Chart.yaml is not a valid chart update for Helm Controller. Setting valuesFiles overrides the default values of the chart with some other file (which must be from within the chart's directory structure; so again, like the original default values.yaml, this is actually a part of the chart).
In case it helps, you can also use the content of a ConfigMap as the whole values map, to merge with the default values: https://fluxcd.io/docs/components/helm/helmreleases/#values-overrides
If you follow the first example and leave targetPath unset (as in this example, from the link above):

valuesFrom:
- kind: ConfigMap
  name: prod-env-values
  valuesKey: values-prod.yaml
... then it's assumed that the configmap key data."values-prod.yaml" contains a map of all the values you want to merge in. Because they come from an external Kubernetes object, they are treated as input by the HelmRelease, which means that no Chart.yaml version bump is needed to trigger a release in order to enjoy them. You just have to place that configmap somewhere that it is synced by Flux at the same time as changes to your HelmRelease, so they will be picked up by Helm Controller together after any git push.
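For illustration, the ConfigMap for that example could look something like this (a sketch; the values under the key are made up):

apiVersion: v1
kind: ConfigMap
metadata:
  name: prod-env-values
data:
  # the valuesKey above points at this key; its content is a complete values map
  values-prod.yaml: |
    replicaCount: 3
    image:
      tag: "1.2.3"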
The other thing to know is that while Helm Controller subscribes to the HelmRelease for updates, there is no subscription to the ConfigMap as far as I know. So if your HelmRelease is set to a long reconcile spec.interval, you will have to make sure it gets reconciled in time some other way (perhaps manually, or you can set the interval shorter).
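A manual trigger could look like this (a sketch; the HelmRelease name and namespace are placeholders):

# reconcile the source first, then the HelmRelease itself
flux reconcile helmrelease my-app -n flux-system --with-source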
This targetPath-less configuration might be the functionality that you're missing from Helm Operator; I think this behaves most similarly to what one could have done with chartFileRef in the old Helm Operator schema.
Thanks for the input!
Yeah, my CI pipeline could output a ConfigMap that gets picked up by Flux and referenced as a ConfigMap in valuesFrom. That's a bit trickier, but workable.
The big thing that I'm really trying to achieve is the ability to use the helm command line tools alongside Flux for debuggability. If I could locally use helm to render out what's going to happen on the cluster, that would help greatly with developing and debugging charts.
To exemplify: if I run the helm chart locally with, for instance, helm template my_helm_release . -f values_prod_tag.yaml, it shows me what it would render, and if I do helm diff upgrade my_helm_release . -f some_other_values_file.yaml, it shows me what my changes to the values or the chart would do on the cluster.
If I could just point the HelmRelease object at my helm values file somehow, that would be one way to do that. But if I put my values in a ConfigMap or a Secret, or inline them in the HelmRelease values section, then it doesn't really allow me to use the helm CLI tool to inspect the rendering process.
Is there some "Flux"-y way to deal with that kind of scenario? How do people usually deal with debuggability and predictability with flux? I'm pretty new here, so sorry if I've overlooked something obvious.
That's a good point and that's definitely a known gap in our usability.
I think I heard it said that we would have a replacement for kubectl diff soon, dependent on the server-side apply feature that we're getting with Kustomize Controller v1beta2. I do not know how this relates to Helm, but since I am pretty sure the Helm Controller has actually run its output through Kustomize since a recent release, it stands to reason that perhaps Helm Controller will get this feature soon too. That could be wishful thinking.
In the meantime, you can extract the values from your configmap and render them in the helm client like this:

yq eval .data values-cm.yaml | helm template jenkins jenkins/jenkins -f -
I used helm get values to get an idea of what all values I was sending in total to my large, complex helm chart, then tested this with a really massive configmap, and I got acceptable-looking output from helm template for my purposes. But when I ran it through helm diff, it was clear that the diff was unhelpfully respectful of whitespace, and I got some diffs that were basically nonsense, like this one:

yq eval .data values-cm.yaml | helm diff upgrade jenkins jenkins/jenkins -f - | less -R
  volumes:
- - emptyDir: {}
-   name: plugins
- - configMap:
+ - name: plugins
+   emptyDir: {}
+ - name: jenkins-config
+   configMap:
      name: jenkins
-   name: jenkins-config
That seems to be a problem in Helm Diff (and thanks for pointing at that; I wasn't aware of this plugin). I see it complaining about some things that will be removed; I don't know if that's because of configmap generators or another issue. I also see a bundle of labels that I know Flux uses for accounting, which are missing because helm template won't emit them, and so they show up in the diff of every resource. This is hard to read and not an ideal suggestion for solving this problem for users.
Suffice it to say that while this diff problem may soon be solved for Flux Kustomizations, thanks to advances in Kustomize Controller, I am afraid it will be a while longer before it can be solved in Helm Controller, if it can be solved at all. The semantics of Helm's 3-way merge, added together with things like configmap generators and postRender instructions, make some diffs very hard to reason about. It is on the order of twice as difficult to answer this question for Helm.
Yeah, I also noticed that helm diff produces quite a lot of non-meaningful output, where the two versions are functionally equivalent but not textually equivalent.
It should be noted that, AFAIK, helm diff compares the locally rendered text with the text saved for the previous helm release in that release's Secret, like sh.helm.release.v1.fleet-jenkins-prod.v174. It doesn't compare against the actual resources or do drift analysis or so. So it seems that the Helm Controller alters the provided helm text into something functionally equivalent but with a different format. And I'm not sure how relevant any kubectl diff replacement would be, since it would probably compare with the actual resources, not the saved helm release output from the Secret.
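As an aside, you can inspect exactly what Helm stored there; a sketch, assuming the Secret name from above (inside the Secret's data, the release payload is base64-encoded and gzipped on top of Kubernetes' own base64 encoding):

# decode the stored release and print the manifest text that helm diff compares against
kubectl get secret sh.helm.release.v1.fleet-jenkins-prod.v174 \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip -c | jq -r '.manifest'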
For instance, if your k8s resource uses a list with

# flow style
[ foo bar, baz ]

then sometimes it seems like the helm controller will instead apply something like

# block style
- foo bar
- baz

or vice versa. So yeah, the helm controller altering the source text is a bit unfortunate. I'm not sure why this happens, but it seems like it's loaded into some internal format and normalized (via Kustomize, perhaps?), which causes the applied helm release to differ from the rendered text.
If you would prefer, that could be a separate GitHub issue (or perhaps something that the helm diff project has to deal with; perhaps a "normalize" flag or something). As I said, there might be some Flux'y way to handle diffing, but it seems like if we want to support the typical helm development flow, then there are a few things to consider.
FYI I made this issue about the diffing: https://github.com/databus23/helm-diff/issues/310
We would have better luck if we could feed both versions through a YAML linting/sorting function before comparing them.
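Something like this might get closer (a sketch, assuming mikefarah's yq v4 and two rendered manifests saved locally as rendered-a.yaml and rendered-b.yaml, which are placeholder names):

# re-emit both documents through the same YAML printer with map keys sorted,
# so only semantic differences survive the diff
yq eval -P 'sort_keys(..)' rendered-a.yaml > a.norm.yaml
yq eval -P 'sort_keys(..)' rendered-b.yaml > b.norm.yaml
diff -u a.norm.yaml b.norm.yaml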
I'm afraid it probably isn't Helm Controller that has munged up the YAML, but the cluster's etcd data store, which likely does not treat maps as ordered data structures. It's also possible that I'm wrong and Helm Controller actually did munge the data before writing it into the helm revision secret (and maybe that's by design); either way, the effect is that we get these unordered data structures back out with some keys in a different order than what helm template had originally put into the cluster.
If this is truly a divergence from how upstream Helm behaves, and it impacts how Helm Diff can be used, I would generally tend to think of that kind of thing as a bug in Helm Controller, but I would almost rather consider it a bug in helm diff. Moreover, I think there's a chance it would be solved rapidly on the helm diff side, whereas I think it might require reworking the way Helm Controller interfaces with its rendered YAML templates to resolve the ordering inconsistency, which is perhaps less likely to get attention soon.
I generally wouldn't expect any YAML that gets fed into the cluster and read back out to preserve its original formatting. It's interesting that helm diff does, though, and it can apparently get away with that until it comes up against Helm Controller!
The latest source-controller and helm-controller releases offer support for defining a spec.chart.spec.reconcileStrategy, which, if set to Revision, will take file changes in a GitRepository into account: https://fluxcd.io/docs/guides/helmreleases/#release-when-a-source-revision-changes-for-git-and-cloud-storage
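For reference, a minimal sketch of what that looks like in the HelmRelease chart template (the chart path and source name are illustrative):

spec:
  chart:
    spec:
      chart: ./charts/my-app
      # rebuild the chart whenever the source revision changes,
      # not only when the chart version in Chart.yaml is bumped
      reconcileStrategy: Revision
      sourceRef:
        kind: GitRepository
        name: my-repo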
That sounds really promising @hiddeco! I guess we can close this one out. Any idea when it's coming in a flux release?
This is available with flux >=0.18.0, but waiting for v0.12.1 to end up in a next CLI patch release (or manually bumping it) is advisable, due to #333.
Originally posted as https://github.com/fluxcd/flux/issues/3556
Thanks for a great piece of software! I'm having a blast with it.
This issue is not about a bug in the runtime implementation, but about the semantics of the HelmRelease CRD.
I have a HelmRelease where I load the values from normal helm value files. According to https://fluxcd.io/docs/components/helm/helmreleases/ the way to do that is to put them in the Spec, like so:
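Something along these lines (a sketch; the chart path and sourceRef are placeholders, and the valuesFiles match what's described below):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-release
spec:
  interval: 5m
  chart:
    spec:
      chart: ./charts/my-chart
      sourceRef:
        kind: GitRepository
        name: my-repo
      # loads the chart's values.yaml, then layers values_prod_tag.yaml on top
      valuesFiles:
        - ./charts/my-chart/values.yaml
        - ./charts/my-chart/values_prod_tag.yaml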
This works perfectly fine, loading in the values.yaml file and adding values_prod_tag.yaml on top. But it doesn't update the helm release on reconciliation if either of the values files is changed. Reconciliation will only pick up the changes if I bump the "version" number on the chart.
The reason why I'm using normal helm values.yaml files instead of a ConfigMap or Secret definition is twofold.

Updating a values file referenced by a HelmRelease should re-render the helm chart and update the installed helm release upon reconciliation.
What I'm looking for is probably what was referred to as valuesFrom: chartFileRef in the Helm Operator: https://fluxcd.io/docs/migration/helm-operator-migration/#chart-file-references. Are there any plans to implement this for the Helm Controller? It would be awesome to be able to ship values into the rendering from a normal helm values file, rather than a ConfigMap or Secret definition file.

Alternatively, if there is some other more "Flux"-y way, I would be very interested to learn!
➜ ~ flux check
► checking prerequisites
✔ kubectl 1.22.0 >=1.18.0-0
✔ Kubernetes 1.20.9 >=1.16.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.11.2
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.14.1
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.16.0
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.15.4
✔ all checks passed