hiddeco opened this issue 1 year ago
Thanks for such a smart implementation of this "so desired" feature ;-) To be sure I fully understood it from the https://fluxcd.io/flux/components/helm/helmreleases/#drift-detection description:
When a HelmRelease is in-sync with the Helm release object in the storage, the controller will compare the manifests from the Helm storage with the current state of the cluster using a server-side dry-run apply. If this comparison detects a drift (either due to a resource being created or modified during the dry-run), the controller will perform an upgrade for the release, restoring the desired state.
So helm-controller, for each reconciled HelmRelease object, will periodically (driven by the .spec.interval setting):
If any drift is detected -> a new Helm upgrade procedure is performed using the referred Helm chart.
Did I get it right? If so, a few additional questions:
The upgrade will honor the .spec.upgrade settings in the involved HelmRelease object (so, as an example, triggering again the pre/post-upgrade hooks if .spec.upgrade.disableHooks not set or set to false), right?
"When a HelmRelease is in-sync with the Helm release object in the storage ..." (just to confirm it).
Thanks in advance.
Compare the result with the current "content/intent" of each of the API objects for those manifests used to run the dry-run SSA step (including also the detection of missing API objects)
We look at whether the object was created or changed during the dry-run apply. If it changed, we compare "old" (the resource as present on the cluster) and "new" (the resource after the manifest as defined in the release has been dry-run applied) to provide a diff in the logs. But "created" or "changed" is already sufficient to determine if we should trigger a new release, as it means the resource in the cluster has either been deleted or mutated.
[..] as an example, triggering again the pre/post-upgrade hooks if .spec.upgrade.disableHooks not set or set to false
This is correct.
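For completeness, the knob being discussed is a fragment of the HelmRelease spec; a minimal sketch, shown with its default value:

```yaml
spec:
  upgrade:
    # When set to true, pre/post-upgrade hooks are skipped, including
    # during upgrades triggered by drift correction. Defaults to false.
    disableHooks: false
```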
Is step 1 above ignoring the manifests annotated as pre/post-install/upgrade/rollback/test hooks?
This is correct, as hooks are not part of the manifest blob in a Helm release storage object, but stored separately.
All this only happens if the latest version of the involved Helm release succeed (i.e. we did not get any Helm error/failure in the previous Helm install/upgrade/rollback procedure for the involved HelmRelease object), right?
This is correct.
Hi, I've read the prometheus example around drift-detection. If I understand well, PrometheusRules are excluded from drift-detection. So, when someone deletes/modifies one of the mix-in rules, the change is not reverted by flux?
I tried this out for a little bit and it seems to be working, correctly recreated a deployment I deleted 👍️
If I understand well, PrometheusRules are excluded from drift-detection.
This is correct, as PrometheusRules objects are annotated by a mutating webhook which (at present) does not get triggered by a dry-run Server Side Apply, causing the controller to detect spurious drift:
PrometheusRule/monitoring/kube-prometheus-stack-etcd diff:
.metadata.annotations.prometheus-operator-validated
+true
PrometheusRule/monitoring/kube-prometheus-stack-general.rules diff:
.metadata.annotations.prometheus-operator-validated
+true
...
We are looking into options to e.g. define a YAML path to ignore to circumvent this issue, rather than ignoring the resources in full.
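The ignore rules that later shipped with helm-controller v0.37.0 (Flux v2.2.0) can express exactly this; a sketch of a HelmRelease spec fragment for the PrometheusRule case above, with field names per the v2beta2 API:

```yaml
spec:
  driftDetection:
    mode: enabled
    ignore:
      # Ignore the annotation added by the prometheus-operator
      # admission webhook, instead of excluding the objects in full.
      - paths: ["/metadata/annotations/prometheus-operator-validated"]
        target:
          kind: PrometheusRule
```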
Sounds really promising. Thank you.
Thanks for this feature!
@hiddeco We have two questions regarding drift detection:
Would be great to have support for flux diff helmrelease xyz. Is this already planned?
This is indeed planned, but not expected to land before end Q2 / sometime in Q3.
Any plans to add new prometheus metrics so that we can have alerting on drift detection?
Can you please create a separate issue for this?
Would be great to have support for flux diff helmrelease xyz. Is this already planned?
This is indeed planned, but not expected to land before end Q2 / sometime in Q3.
Is there an issue or something I could follow?
@benarnold42 I think this is the same issue in principle (flux build helmrelease implies flux diff as well):
What does this error mean?
ProgressingWithRetry Detecting drift for revision xxx with a timeout of 4m30s
Have updated the issue to reflect the changes since the v0.37.0 release, included in Flux v2.2.0. For more information, refer to this blog post or the documentation.
Is it possible to set the default drift detection to mode: enabled or mode: warn without just applying a global kustomize patch? If not, that's definitely a feature we'd be interested in.
I think a better solution than controller global defaults is to make use of a policy agent which enforces certain default configurations through validation; an example of this is Kyverno policies (https://kyverno.io/docs/writing-policies/). This ensures you stick to a pure declarative model which can easily be reapplied elsewhere while functioning the same.
The reason the feature was previously available as a global configuration option was due to it being experimental, and us having questions around how we wanted to introduce it in the API.
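To illustrate the suggestion, a default like the one discussed could be expressed as a Kyverno mutate policy along these lines (a sketch, untested; the policy name is made up, and explicit per-release settings keep precedence):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-helmrelease-drift-detection
spec:
  rules:
    - name: default-drift-detection-mode
      match:
        any:
          - resources:
              kinds:
                - HelmRelease
      mutate:
        patchStrategicMerge:
          spec:
            driftDetection:
              # Kyverno's "+(...)" anchor adds the field only if it is
              # not already set, so explicit configuration wins.
              +(mode): enabled
```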
Alright, I understand the reasoning. Thanks for the feedback.
I think a better solution than controller global defaults is to make use of a policy agent which enforces certain default configurations through validation; an example of this is Kyverno policies (kyverno.io/docs/writing-policies). This ensures you stick to a pure declarative model which can easily be reapplied elsewhere while functioning the same.
The reason the feature was previously available as a global configuration option was due to it being experimental, and us having questions around how we wanted to introduce it in the API.
I don't really understand this, isn't this way less declarative than just specifying a global default?
We'd also be interested in enabling this feature by default without having to touch every HelmRelease and definitely without having to write random mutating hooks in random policy agents.
Especially since a global default is much more portable than "you have to install another component you didn't want and depending on what policy agent you decide on you have to define this value differently. Oh, and don't forget that this doesn't apply to already existing resources, so, yeah, should've thought about this earlier"
Thanks for this cool feature :)
One question: I see that in both the debug logs and in the events, we print something like:
X/a changed (0 additions, 1 changes, 1 removals)
Y/B changed (0 additions, 1 changes, 0 removals)
I would be interested in what are the actual changes, additions and removals before changing the mode from "warn" to "enabled". Is there any way to do so? Thanks!
If you set --log-level to debug, the controller will log the calculated JSON Patch for the object.
In the new year, we are planning to add a command to flux to allow getting this information without having to resort to the logs.
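For reference, the flag can be set on the helm-controller Deployment with a patch in the flux-system Kustomization; a sketch, assuming the standard bootstrap layout (adjust resource names to your setup):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      # JSON 6902 patch appending the flag to the container args.
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --log-level=debug
    target:
      kind: Deployment
      name: helm-controller
```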
If flux detects some kind of drift, e.g. I delete a resource, does flux execute the hooks? A bit of testing gave me the impression that the hooks won't be executed, in particular https://github.com/teutonet/teutonet-helm-charts/blob/main/charts/base-cluster/templates/flux/_create-authentication-key-secret-job.yaml
Looking at the secrets, or the Helm history, no upgrade has been run.
Is step 1 above ignoring the manifests annotated as pre/post-install/upgrade/rollback/test hooks?
This is correct, as hooks are not part of the manifest blob in a Helm release storage object, but stored separately.
Could this be implemented? We rely on hooks for various dynamic things in our Helm charts.
Btw, the current implementation, with .spec.driftDetection.mode=disabled being the default, is actually a breaking change, as before I could flux reconcile a HelmRelease to recreate missing resources and now this does nothing.
So creating a flux-global flag or just enabling this by default should be done.
as before I could flux reconcile a HelmRelease to recreate missing resources and now this does nothing.
Before helm-controller had no drift detection capability so it couldn't recreate missing resources.
I used flux suspend hr followed by flux resume hr to reconcile a HelmRelease before.
@Legion2 to achieve the same thing with Flux 2.2 do flux reconcile hr --force.
@Legion2 to achieve the same thing with Flux 2.2 do flux reconcile hr --force.
Nice, but still, a global flag would be amazing, because I see no other good way of setting this globally. And setting this on all of my dozens of HelmReleases is quite annoying and error prone
+1 for a global setting, this would be amazing; otherwise this is unusable.
HelmReleases are applied on the cluster by Flux Kustomizations; adding a patch to enable drift detection for all of them can be done with:
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
spec:
  patches:
    - patch: |
        apiVersion: helm.toolkit.fluxcd.io/v2beta2
        kind: HelmRelease
        metadata:
          name: all
        spec:
          driftDetection:
            mode: enabled
      target:
        kind: HelmRelease
Mh, that would be an annoying but semi-workable solution, although I'm not happy with it.
The drift detection is not enabled by default because it introduces risk. It would be a goal of the v2beta2 -> v2 update to make this safe enough to enable by default. Enabling this globally introduces hazards that you should be monitoring for, and mitigating. The hazards are not uncommon, they occur in highly popular charts like kube-prometheus-stack and other common places, like anything that uses an HPA.
https://fluxcd.io/flux/components/helm/helmreleases/#drift-detection
which gets a reference from:
https://fluxcd.io/flux/installation/configuration/helm-drift-detection/
covers the hazard, how to monitor for it, and the mitigation.
This is important, users who enable drift detection may have to read it all and take action to prevent Flux from wasting a lot of resources dry-cycling and upgrading forever. Not every configuration will be affected, but when we introduce global configuration it brings the risk that some combination of defaults we have not considered is not good, and in this case we've advised many users to do retries: -1, which could result in major issues in combination with the flag enabled. And it could be very difficult to diagnose, because you don't only need to consider the content of the HelmRelease.spec, you also need to consider global configuration, which the users may not even know about or have access to in flux-system.
When we can condense all that into a single action that Flux can take on the user's behalf, then I think it will be possible to do global enable again. Ideally it's not even a feature flag, we just turn it on, since this is what users expect from the definition of GitOps. But as long as the users expectations may be violated by the hazards, certain HelmReleases are probably going to need some careful handling by an admin, and I would expect the global enable to generate more confused support inquiries than it resolves. We understand the current state isn't optimal, that's why it's v2beta2 and not yet v2.
Thanks for your patience as we make progress on this.
It would be a goal of the v2beta2 -> v2 update to make this safe enough to enable by default.
We can’t make this safe as it’s not in our control. There are lots of charts out there which mutate the state from jobs running as Helm hooks, not to mention HPA/VPA. People need to opt in and set ignore rules.
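For reference, such ignore rules live on the HelmRelease itself; a minimal sketch for the HPA case mentioned above (the release name is a placeholder):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: my-release
spec:
  # ...chart, interval, values, etc.
  driftDetection:
    mode: enabled
    ignore:
      # Replica counts are managed by the HPA, so changes to them
      # should not be treated as drift.
      - paths: ["/spec/replicas"]
        target:
          kind: Deployment
```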
Why then ArgoCD manages to enable this globally somehow and they are just fine with that? Every global chart out there has some instructions to make it work with Argo properly (the same thing with exclude rules) and you for some reason encourage people to waste their time building circumventions for poor design...
Why then ArgoCD manages to enable this globally somehow and they are just fine with that? Every global chart out there has some instructions to make it work with Argo properly (the same thing with exclude rules) and you for some reason encourage people to waste their time building circumventions for poor design...
Because ArgoCD doesn't really support Helm Charts, they only use it for simple templating and don't do hooks or API access. 😥
While flux does it correctly, allowing API calls, hooks, ... 🥳
Which is also why ArgoCD and flux can't really be compared 🤷
They do hooks, in a subtly incompatible way that will only bite you when you're getting deeply invested in Helm charts; you won't even notice it's a problem at first:
https://blog.teamhephy.info/blog/posts/tutorials/argocd-sentry-apps.html
It has been the story for years, ArgoCD maintainers recommend Flux if you need "perfect helm installations"
https://github.com/argoproj/argo-cd/issues/4288#issuecomment-693701155
This is one of the leading reasons behind creating Flamingo.
We can’t make this safe as it’s not in our control.
I think the jury is out; there is some mechanism we can add to make it safe, but letting users opt in to the behavior and read the manual is a fine solution IMO. It may be better if a mechanism comes along that lets us short-circuit failures so we don't spin out of control. (Or the appropriate controls may have already made it in; I don't have a test that I can point at that says there is still any problem here...)
I agree that it's not safe to enable globally, if I haven't made that clear, and I won't recommend it because of the danger. This is because we use Helm SDK from upstream, and that's why you won't find this report under ArgoCD anywhere.
If you set --log-level to debug, the controller will log the calculated JSON Patch for the object. In the new year, we are planning to add a command to flux to allow getting this information without having to resort to the logs.
Hey @hiddeco,
Any update on this command?
Unfortunately, Weaveworks, who used to pay me for my work on Flux, went out of business.
Because of this, my contributions to Flux this year have been minimal, and I do not think I will be able to contribute the functionality we had planned back then this year. This does however not mean someone else would not be able to pick this up (and also doesn't imply Flux is no longer maintained).
If someone were to pick this up, a couple of notes:
There is logic in internal/reconcile which is the driving force behind the drift functionality; this logic can be consumed by a potential new command in https://github.com/fluxcd/flux2 to print a diff.
It should also be noted that the bits in internal/action have been prepped in such a way that you can add e.g. --debug options, to for example mimic a helm upgrade --debug using a HelmRelease. This is another (command) use case I had in mind while working on this.
I'm all for implementing the flux diff helmrelease feature, but this would require a lot of my time and also a 2nd maintainer to review all the code changes across Flux packages, controllers and CLI. My focus for this year is in other areas of the project, but if this feature request were to come from a Flux Enterprise customer, then my priorities would change.
:mega: Announcement
We are excited to announce a new (long requested) feature in the helm-controller - drift detection! This is now available in >=v0.37.0, and can be enabled by configuring a HelmRelease with .spec.driftDetection.mode set to enabled.
To enable drift detection without correction, set .mode to warn.
.:mag_right: What is drift detection?
Drift detection allows you to detect any unintentional changes to a resource in your Kubernetes cluster that may have occurred outside of your Helm release process. The feature uses the same approach as kustomize-controller to detect drift by performing a dry-run Server Side Apply of the rendered manifests of a release. When drift is detected, the controller will emit an Event and recreate and/or patch the Kubernetes resources.
:boom: Current limitations
A detailed overview of the detected changes is only logged when --log-level is set to debug. In the Kubernetes Event, only the creation or change of a Kubernetes resource is reported with a brief summary of the changes.
There is no flux diff command available (yet) to manually inspect or detect drift.
:books: References
:phone: Request for Feedback
Please note that this feature is still in its early stages and lacks certain UX features. However, we encourage you to try it out and provide feedback on your experience with it, including any issues you encounter or suggestions for improvements.
Thank you for your help in making the controller even more powerful and reliable!