absnmohammed opened this issue 1 year ago
Flux doesn't make any controlling changes to the manifests that it applies, so it is unlikely that the issue is in Flux; more likely it is an issue in your chart. You are several minor releases behind in Helm Controller (the latest is v0.28.x), and while I don't think this is likely to make a difference for this issue, my first advice is always to try again on the latest version.
The only major way that Flux's helm upgrade differs from upstream Helm (the CLI) is that it enables resource waiting by default. So if your HelmRelease has some resource that does not become ready, it will hold up the whole release and Helm will eventually report the release as failed (whereas the Helm CLI's default behavior is more like "fire-and-forget": it will blissfully consider the release upgrade a success even if some component/service/ingress never becomes ready).
https://fluxcd.io/flux/components/helm/helmreleases/#disabling-resource-waiting
This is configurable: you can enable or disable resource waiting to get behavior closer to Helm's default. It is not likely to be related to the issue reported, but it highlights that there is very little difference between Helm as Flux runs it and Helm as the Helm CLI runs it.
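For reference, a rough sketch of what turning off waiting looks like on a HelmRelease (the name and namespace here are placeholders, not from your setup):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app          # placeholder
  namespace: default    # placeholder
spec:
  install:
    disableWait: true   # don't wait for resources to become ready during install
  upgrade:
    disableWait: true   # don't wait for resources to become ready during upgrade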
You will likely find the same behavior if you upgrade the Helm chart by hand in your cluster (this is one way you could confirm or refute that there is any behavior difference between the Helm CLI and Flux's Helm Controller).
If you can share the chart and HelmRelease, we can possibly shed some additional light on how this happens!
Hi @kingdonb, thank you for the comment.
I have upgraded the helm-controller version to v0.28.1.
Our Helm chart sits in a different repository and we are using a mono-chart. We have created profiles inside the mono-chart for each service, like the following, and reference them as valuesFiles in the HelmRelease. Each time developers make a change there, we have to bump the chart version, which forces every release to recreate its pods. I know we can pin the chart version in the individual HelmRelease (a sketch of that is shown right after the chart tree below), but devs don't want to make a number of changes in different places for fear of losing track.
├── Chart.yaml
├── profiles
│ ├── audit
│ │ ├── common.yaml
│ │ ├── dev.yaml
│ │ └── prod.yaml
│ ├── audit-advance-etl
│ │ ├── common.yaml
│ │ ├── dev.yaml
│ │ └── prod.yaml
│ ├── audit-reporting
│ │ ├── common.yaml
│ │ ├── dev.yaml
│ │ └── prod.yaml
│ ├── audit-snowflake-etl
│ │ ├── common.yaml
│ │ ├── dev.yaml
│ │ └── prod.yaml
│ ├── auth-advance-etl
│ │ ├── common.yaml
│ │ ├── dev.yaml
│ │ └── prod.yaml
│ ├── common-dev.yaml
│ ├── common-prod.yaml
│ ├── company
│ │ ├── common.yaml
│ │ ├── dev.yaml
│ │ └── prod.yaml
│ ├── company-advance-etl
│ │ ├── common.yaml
│ │ ├── dev.yaml
│ │ └── prod.yaml
│ ├── email
│ │ ├── common.yaml
│ │ ├── dev.yaml
│ │ └── prod.yaml
│ ├── file
│ │ ├── common.yaml
│ │ ├── dev.yaml
│ │ └── prod.yaml
├── templates
│ ├── _helpers.tpl
│ ├── configmap.yaml
│ ├── deployment.yaml
│ ├── ingress.yaml
│ ├── pdb.yaml
│ ├── prometheus.yaml
│ ├── rbac.yaml
│ ├── s3bucket.yaml
│ ├── secrets.yaml
│ ├── service.yaml
│ └── serviceaccount.yaml
└── values.yaml
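(For reference, the per-release pinning mentioned above would use the chart spec's version field; a minimal sketch, assuming the chart is published to a HelmRepository source — the names and version are placeholders:)

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: audit-reporting
  namespace: application
spec:
  chart:
    spec:
      chart: mono-chart
      version: "1.2.3"        # placeholder; only this release picks up a chart bump
      sourceRef:
        kind: HelmRepository  # assumed source type
        name: charts          # placeholder name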
Our Flux repo folder structure is like the following, one of the approaches recommended by Flux (a sketch of how a dev overlay ties into its base follows the tree).
├── base
│ ├── audit
│ │ ├── kustomization.yaml
│ │ └── release.yaml
│ ├── audit-advance-etl
│ │ ├── kustomization.yaml
│ │ └── release.yaml
│ ├── audit-apsca-notification-email-scheduler
│ │ ├── kustomization.yaml
│ │ └── release.yaml
│ ├── audit-apsca-sync-scheduler
│ │ ├── kustomization.yaml
│ │ └── release.yaml
│ ├── audit-reporting
│ │ ├── kustomization.yaml
│ │ └── release.yaml
│ ├── audit-snowflake-etl
│ │ ├── kustomization.yaml
│ │ └── release.yaml
│ ├── auth-advance-etl
│ │ ├── kustomization.yaml
│ │ └── release.yaml
│ ├── company
│ │ ├── kustomization.yaml
│ │ └── release.yaml
│ ├── company-advance-etl
│ │ ├── kustomization.yaml
│ │ └── release.yaml
├── dev
│ ├── audit
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ ├── audit-advance-etl
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ ├── audit-apsca-notification-email-scheduler
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ ├── audit-apsca-sync-scheduler
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ ├── audit-reporting
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ ├── audit-snowflake-etl
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ ├── auth-advance-etl
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ ├── auth-advance-etl-new
│ │ └── values.yaml
│ ├── company
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ ├── company-advance-etl
│ │ ├── kustomization.yaml
│ │ └── values.yaml
├── sample
│ ├── base
│ │ ├── kustomization.yaml
│ │ └── release.yaml
│ ├── cron
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ ├── environment
│ │ ├── kustomization.yaml
│ │ └── values.yaml
│ └── imagepolicy-repo
│ └── sample-imagepolicy-repo.yaml
└── test
├── configmap-secrets
│ └── kustomization.yaml
├── crdb-config
│ ├── kustomization.yaml
│ └── test-values.yaml
├── kustomization.yaml
├── link-advance-etl
│ ├── kustomization.yaml
│ └── values.yaml
├── living-docs
│ ├── kustomization.yaml
│ └── values.yaml
└── podinfo
├── kustomization.yaml
└── podinfo-values.yaml
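(A sketch of how one of the dev overlays could tie into its base — assuming the environment values.yaml is a partial HelmRelease applied as a strategic-merge patch; the paths are illustrative:)

# dev/audit/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/audit
patchesStrategicMerge:
  - values.yaml   # partial HelmRelease carrying only the spec.values overrides for dev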
This is how we are referencing the profiles from the Helm chart mono-repo:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: audit-reporting
  namespace: application
spec:
  chart:
    spec:
      valuesFiles:
        - ./charts/mono-chart/values.yaml
        - ./charts/mono-chart/profiles/common-dev.yaml
        - ./charts/mono-chart/profiles/audit-reporting/common.yaml
        - ./charts/mono-chart/profiles/audit-reporting/dev.yaml
  values:
    image:
Note: if we pass the values via Flux without touching the Helm chart, it does a rolling update of the deployment.
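(For illustration, such an override on the Flux side is just a spec.values fragment in the HelmRelease; the image fields below are placeholders:)

spec:
  values:
    image:
      repository: registry.example.com/audit-reporting   # placeholder
      tag: "1.2.3"                                        # placeholder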
Our deployment YAML in the mono-chart repo looks like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
  annotations:
    {{- include "reloader_anotation" . | indent 4 }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      {{- if .Values.serviceAccountName }}
      serviceAccountName: {{ .Values.serviceAccountName }}
Please let me know if you need further information or have questions. Thank you.
Can someone please look into this and shed some light?
@absnmohammed can you show the full HelmRelease manifest?
@gladiatr72, thanks for picking this up. I forgot to update the thread; this has been fixed after an upgrade to the latest helm-controller.
FYI, deployment.yaml is using the default rolling-update strategy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
  annotations:
    {{- include "reloader_anotation" . | indent 4 }}
    {{- toYaml .Values.deploymentAnnotations | nindent 4 }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  strategy:
    type: RollingUpdate
Schweet! 👍
Hi,
We have a scenario where we make changes to values in the Helm chart and bump the chart version. Flux then does an all-at-once deployment instead of a rolling update, which is the default deployment strategy.
This results in downtime. It works fine if the values are changed via the Flux repository instead of the Helm chart repository.
helm-controller:v0.22.1
Please let me know if this is a bug or if I am doing something wrong here.