argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

Helm .Release.Revision is always 1 #8643

Open MeNsaaH opened 2 years ago

MeNsaaH commented 2 years ago


**Describe the bug**

**To Reproduce**

Create a Helm chart that uses the `.Release.Revision` param within any of its manifests. This value is always set to `1`, regardless of the actual revision. Here's a typical example: https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/gitlab/charts/migrations/templates/_helpers.tpl
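For context, a minimal hypothetical chart fragment of the kind affected (names are illustrative, not from the GitLab chart) might name a migrations Job per revision:

```yaml
# templates/migrations-job.yaml -- hypothetical fragment for illustration.
# Helm increments .Release.Revision on each `helm upgrade`, so each upgrade
# yields a uniquely named Job. Rendered through Argo CD, the suffix is
# always "-1".
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrations-{{ .Release.Revision }}
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          image: myapp:latest
          command: ["./run-migrations"]
```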

**Expected behavior**

Argo CD should inject `.Release.Revision` so that it auto-increments after every sync that produces a diff.


**Version**

```
argocd: v2.1.6+a346cf9.dirty
  BuildDate: 2021-11-01T02:05:06Z
  GitCommit: a346cf933e10d872eae26bff8e58c5e7ac40db25
  GitTreeState: dirty
  GoVersion: go1.17.2
  Compiler: gc
  Platform: darwin/amd64
argocd-server: v2.2.3+987f665
  BuildDate: 2022-01-18T17:53:49Z
  GitCommit: 987f6659b88e656a8f6f8feef87f4dd467d53c44
  GitTreeState: clean
  GoVersion: go1.16.11
  Compiler: gc
  Platform: linux/amd64
  Ksonnet Version: v0.13.1
  Kustomize Version: v4.2.0 2021-06-30T22:49:26Z
  Helm Version: v3.8.0+gd141386
  Kubectl Version: v0.22.2
  Jsonnet Version: v0.17.0
```


crenshaw-dev commented 2 years ago

@MeNsaaH what problem would incrementing the revision solve? What tools/processes do you need to consume that information?

MeNsaaH commented 2 years ago

@crenshaw-dev I eventually opted for a Kustomize + Helm plugin setup to add hooks to the manifests. However, some Helm charts, as I've said, use the revision to generate unique manifests, for instance for Jobs: a new Job is created for every revision.

crenshaw-dev commented 2 years ago

That makes sense. I think to preserve backwards-compatibility, we'd have to make this opt-in.

MeNsaaH commented 2 years ago

@crenshaw-dev I'd like to work on this. Is there any docs for first time contributors?

crenshaw-dev commented 2 years ago

Thanks for being willing to contribute! Our contribution docs are too much and not enough. There's plenty of material, but much of it seems out of date. https://argo-cd.readthedocs.io/en/stable/developer-guide/code-contributions/

I keep an eye on #argo-contributors in CNCF Slack. If you run into trouble, feel free to ask for help there!

crenshaw-dev commented 2 years ago

Thinking about the design of this feature... should current-release-revision be a field in the App spec? If the App resource is managed by Argo CD, what do we do when the current release revision in git doesn't match the one on the cluster?

Maybe the info should be stored in the status field. I'm not familiar enough with CRD design to know whether that's an appropriate place to store that kind of information.

MeNsaaH commented 2 years ago

> Thanks for being willing to contribute! Our contribution docs are too much and not enough. There's plenty of material, but much of it seems out of date. argo-cd.readthedocs.io/en/stable/developer-guide/code-contributions
>
> I keep an eye on #argo-contributors in CNCF Slack. If you run into trouble, feel free to ask for help there!

Thanks. I'll take a look.

MeNsaaH commented 2 years ago

> Thinking about the design of this feature... should current-release-revision be a field in the App spec? If the App resource is managed by Argo CD, what do we do when the current release revision in git doesn't match the one on the cluster?
>
> Maybe the info should be stored in the status field. I'm not familiar enough with CRD design to know whether that's an appropriate place to store that kind of information.

It should be part of the status field, since it's going to be generated by Argo CD.

mmerickel commented 2 years ago

When would the version number bump? In helm it's every time you run an upgrade. Would that map to a sync in argo or something else? Seems per-sync would be quite noisy.

MeNsaaH commented 2 years ago

I believe we could use the revision for this. Most of the time this is the commit SHA of the application.

MeNsaaH commented 2 years ago

I've checked this out, @crenshaw-dev. It seems Helm does not expose a way to set the revision number; the `.Release.Revision` object is generated internally by Helm.

Looking at how Argo CD uses Helm, by calling `helm template <ARGS>`, it'll be kinda difficult to do this.
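For reference, Argo CD's manifest generation is roughly equivalent to the command below (a simplified sketch; the actual flags depend on the Application spec). Since `helm template` renders offline with no release record, and has no flag to override the revision, `.Release.Revision` always comes out as 1:

```shell
# Simplified sketch of how Argo CD renders a Helm source.
# `helm template` renders client-side, without a Helm release record,
# so .Release.Revision is always 1 and there is no CLI flag to set it.
helm template my-release ./mychart \
  --namespace my-namespace \
  --values values.yaml
```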

remidebette commented 2 years ago

Hi,

This is something that we are looking at as well: we want to run some init containers only when `.Release.Revision` is 1 (i.e. on the first sync), which corresponds to the creation of a feature environment in our case.
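A sketch of the kind of gate we have in mind (hypothetical template; under Argo CD today this branch is always taken, because `.Release.Revision` renders as 1 on every sync):

```yaml
# Hypothetical fragment: run a seeding init container only on the first
# install of a feature environment.
spec:
  template:
    spec:
      initContainers:
        {{- if eq (int .Release.Revision) 1 }}
        - name: seed-environment
          image: myapp-seed:latest
          command: ["./seed"]
        {{- end }}
```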

Argo CD has its own system for maintaining history and rollback revisions; I think this would provide good enough ground for the revision number?

MeNsaaH commented 2 years ago

@remidebette sure, Argo CD has its own way of maintaining history and rollbacks. However, Argo CD cannot modify the Helm `.Release.Revision` object. How do you suggest such a variable be passed?

remidebette commented 2 years ago

Hi, thanks for your response. I did not understand that this was the issue.

I do not know about Helm's internals. Would it be possible to reach out to the Helm folks to see if this can be injected in the `template` command?

MeNsaaH commented 2 years ago

I doubt that will be possible. I think Helm is designed to generate those variables internally, and I don't see them allowing those to be modified externally.

webcoderz commented 2 years ago

Any update on this? Maybe something like taking the revision number generated by Argo and passing it in as a value where the manifests would normally look for Helm's revision number? For the migration job example, it's mostly looking for a new Job name to be generated, because the init containers check for the job's completion (i.e. the init containers for the other pods wait for `migration-job-1x` to complete), and then on each deployment the name iterates based on the revision number.

mmerickel commented 2 years ago

I suffered from the lack of this feature, so I just wanted to point out the improvement/workaround I made in our Helm charts when moving to Argo CD. Basically, the new `{{ include "myapp.initJobName" $ }}` below can be used in places that depend on the init job, and it will only change when certain attributes of the init job, like the image/tag, change. This actually means you run migrations less often (because the hash does not change on every deploy), which is closer to optimal, but requires the extra legwork here. Hope this idea helps someone.

```yaml
{{/*
Define the name of the init job.

This should change whenever the core image tag changes so that we are running
the latest migrations before deploying.
*/}}
{{- define "myapp.initJobName" -}}
{{- $key := cat (include "myapp.image" .) .Values.imagePullPolicy }}
{{- $key := cat $key (include "myapp.labels" (dict "context" .)) }}
{{- $key := cat $key (toYaml .Values.init.podLabels) }}
{{- $key := cat $key (toYaml .Values.init.podAnnotations) }}
{{- $key := cat $key (toYaml .Values.init.resources) }}
{{- $key := cat $key .Values.waitFor.image .Values.waitFor.imagePullPolicy }}
{{- $key := cat $key (toYaml .Values.env) }}
{{- $key := cat $key (toYaml (include "myapp.podSchedulingConstraints" (dict "context" . "spec" .Values.init "name" "init"))) }}
{{- if .Values.forceUpdate }}
  {{- $key = cat $key (quote .Values.forceUpdate) }}
{{- end }}
{{- $hashKey := sha1sum $key | substr 0 8 }}
{{- include "myapp.fullname" . }}-init-{{ $hashKey }}
{{- end }}
```

webcoderz commented 2 years ago

Ahh interesting. What are you using to check for job completeness? We are using k8s-wait-for, which sleeps and detects completeness by checking the job name. What solution did you use?

mmerickel commented 2 years ago

Prior to Argo CD I was (and still am) using k8s-wait-for, but Argo CD has sync waves, which also help coordinate the rollout. I've never deployed with only sync waves, so I'm not sure they fully solve the problem. You can see below what I apply to a deployment that waits for the job before starting. It does require granting the deployment's service account the ability to query pod/job statuses in the namespace.

```yaml
      initContainers:
        - name: wait-for-init-job
          image: {{ .Values.waitFor.image }}
          imagePullPolicy: {{ .Values.waitFor.imagePullPolicy }}
          args:
            - "job-wr"
            - {{ include "myapp.initJobName" . }}
```

webcoderz commented 2 years ago

Sweet, will try this out, that's exactly my setup. Thanks!

MeNsaaH commented 2 years ago

> any update on this? maybe something like taking the revision number generated from argo and passing it in as a value to where the manifests would normally look for helms revision number? or something like this? for the migration job example its mostly looking for a new container number to be generated because it looks for when the job completes. (i.e. migration-job-1x and the init containers for the other pods check for migration-job-1x to be completed) and then on each deployment it iterates based on revision number

@webcoderz unfortunately, this will be very difficult to implement, because Helm does not allow modifying the `Release` object from the `helm template` command, which is what Argo CD uses. So Argo CD technically cannot modify those variables. Maybe in the future there will be a way to support this, but I don't see how to do it right now.

webcoderz commented 2 years ago

Yeah, actually it seems @mmerickel had the right idea. I ended up making a SHA of the tag, `{{- sha1sum .Values.tag | substr 0 8 }}`, and that seems to work well.
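For anyone landing here, that workaround might look something like this (hypothetical names, not from a real chart):

```yaml
# Hypothetical Job named by a hash of the image tag instead of
# .Release.Revision: the name only changes when the tag changes, so a new
# migration Job is created per image version rather than per sync.
apiVersion: batch/v1
kind: Job
metadata:
  name: migration-job-{{ sha1sum .Values.tag | substr 0 8 }}
```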