MeNsaaH opened 2 years ago
@MeNsaaH what problem would incrementing the revision solve? What tools/processes do you need to consume that information?
@crenshaw-dev I eventually opted for a Kustomize + Helm plugin setup to add hooks to the manifests. However, as I mentioned, some Helm charts use the revision to generate unique manifests, for instance in Jobs: a new Job is created for every revision.
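For context, a chart that bakes the revision into a Job name looks roughly like this (a minimal sketch; the chart structure, names, and values are made up for illustration):

```yaml
# Hypothetical chart fragment: the Job name embeds .Release.Revision,
# so every Helm revision produces a brand-new Job object.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrations-{{ .Release.Revision }}
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          image: myapp:{{ .Values.image.tag }}
          command: ["./run-migrations.sh"]
```

Under `helm template`, `.Release.Revision` always renders as 1, which is exactly the problem this issue describes.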
That makes sense. I think to preserve backwards-compatibility, we'd have to make this opt-in.
@crenshaw-dev I'd like to work on this. Is there any docs for first time contributors?
Thanks for being willing to contribute! Our contribution docs are too much and not enough. There's plenty of material, but much of it seems out of date. https://argo-cd.readthedocs.io/en/stable/developer-guide/code-contributions/
I keep an eye on #argo-contributors in CNCF Slack. If you run into trouble, feel free to ask for help there!
Thinking about the design of this feature... should current-release-revision be a field in the App spec? If the App resource is managed by Argo CD, what do we do when the current release revision in git doesn't match the one on the cluster?
Maybe the info should be stored in the status field. I'm not familiar enough with CRD design to know whether that's an appropriate place to store that kind of information.
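As a sketch of what the status-field option might look like (purely illustrative; the field name `syncRevisionCounter` is an assumption, not an existing Argo CD field):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
status:
  # Hypothetical counter, incremented by the controller on each sync
  # that produced a diff. Since status is conventionally owned by the
  # controller (not stored in git), it cannot drift from the spec.
  syncRevisionCounter: 7
```

In Kubernetes CRD design, the status subresource is the conventional home for controller-generated state, which is why git-vs-cluster mismatch isn't a concern there.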
Thanks. I'll take a look
It should be part of the status field, since it's going to be generated by Argo CD.
When would the version number bump? In helm it's every time you run an upgrade. Would that map to a sync in argo or something else? Seems per-sync would be quite noisy.
I believe we could use the revision for this. Most of the time this is the commit-sha of the application.
I've checked this out @crenshaw-dev. It seems Helm does not expose a way to set the revision number; the .Release.Revision object is generated internally by Helm. Looking at how Argo CD uses Helm, by calling helm template <ARGS>, it'll be quite difficult to do this.
Hi,
This is something that we are looking into as well: running some init containers only when .Release.Revision is 1 (meaning on the first sync), which corresponds to the creation of a feature environment in our case.
ArgoCD has its own system for maintaining History and Rollback revisions, I think this would provide good enough ground for the revision number?
@remidebette Sure, Argo CD has its own way of maintaining history and rollbacks. However, Argo CD cannot modify the Helm .Release.Revision object. How do you suggest such a variable be passed?
Hi, thanks for your response. I did not understand this was the issue.
I do not know about Helm's internals. Would it be possible to reach out to the Helm folks to see whether this can be injected via the template command?
I doubt that will be possible. I think Helm is designed to generate those variables internally, and I don't see them allowing those values to be modified externally.
Any update on this? Maybe something like taking the revision number generated by Argo and passing it in as a value where the manifests would normally look for Helm's revision number? For the migration-job example, it mostly needs a new job name to be generated on each deployment, because the init containers of the other pods check for that job's completion (i.e. they wait for migration-job-1x to complete), and the name iterates based on the revision number.
I suffered from the lack of this feature, so I just wanted to point out the improvement/workaround I made in our Helm charts when moving to Argo CD. Basically, the new {{ include "myapp.initJobName" $ }} below can be used in places that depend on the init job, and it will only change when certain attributes of the init job (like the image/tag) change. This actually means you run migrations less often (because the hash does not change on every deploy), which is closer to optimal but requires the extra legwork here. Hope this idea helps someone.
```
{{/*
Define the name of the init job.
This should change whenever the core image tag changes so that we are running
the latest migrations before deploying.
*/}}
{{- define "myapp.initJobName" -}}
{{- $key := cat (include "myapp.image" .) .Values.imagePullPolicy }}
{{- $key := cat $key (include "myapp.labels" (dict "context" .)) }}
{{- $key := cat $key (toYaml .Values.init.podLabels) }}
{{- $key := cat $key (toYaml .Values.init.podAnnotations) }}
{{- $key := cat $key (toYaml .Values.init.resources) }}
{{- $key := cat $key .Values.waitFor.image .Values.waitFor.imagePullPolicy }}
{{- $key := cat $key (toYaml .Values.env) }}
{{- $key := cat $key (toYaml (include "myapp.podSchedulingConstraints" (dict "context" . "spec" .Values.init "name" "init"))) }}
{{- if .Values.forceUpdate }}
{{- $key = cat $key (quote .Values.forceUpdate) }}
{{- end }}
{{- $hashKey := sha1sum $key | substr 0 8 }}
{{- include "myapp.fullname" . }}-init-{{ $hashKey }}
{{- end }}
```
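The naming scheme above boils down to "hash the inputs that should force a re-run, keep 8 hex characters". A minimal Python sketch of the same idea (the function name and inputs are illustrative, not part of the chart):

```python
import hashlib

def init_job_name(fullname, *attrs):
    # Join the attributes that should trigger a new job run (image tag,
    # labels, resources, ...), much like chaining `cat` in the Helm
    # template above.
    key = " ".join(str(a) for a in attrs)
    # Helm's `sha1sum $key | substr 0 8` == first 8 hex chars of the digest.
    suffix = hashlib.sha1(key.encode("utf-8")).hexdigest()[:8]
    return f"{fullname}-init-{suffix}"
```

The name stays stable across deploys until one of the inputs changes, so the Job (and anything waiting on it) is only re-created when it actually needs to run again.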
Ahh interesting, what are you using to check for job completion? We are using k8s-wait-for, which sleeps and detects completion by checking the job name. What solution did you use?
Prior to argo-cd I was (and still am) using k8s-wait-for but argo-cd has sync waves which also help coordinate the rollout. I've never deployed it with only sync waves so I'm not sure they fully solve the problem. You can see here what I apply to a deployment that waits for the job before starting. It does require granting the deployment's service account the ability to query pod/job statuses in the namespace.
```yaml
initContainers:
  - name: wait-for-init-job
    image: {{ .Values.waitFor.image }}
    imagePullPolicy: {{ .Values.waitFor.imagePullPolicy }}
    args:
      - "job-wr"
      - {{ include "myapp.initJobName" . }}
```
Sweet, will try this out, that's exactly my setup. Thanks!
@webcoderz Unfortunately this will be very difficult to implement, because Helm does not allow modifying the Release object from the helm template command, which is what Argo CD uses. So Argo CD cannot technically modify those variables. Maybe in the future there will be a way to support this, but I don't see how to do it right now.
Yeah, actually it seems @mmerickel had the right idea. I ended up making a SHA of the tag, {{- sha1sum .Values.tag | substr 0 8 }}, and that seems to work well.
Checklist:

[x] argocd version

Describe the bug

The .Release.Revision value is always set to 1, regardless of the actual revision.

To Reproduce

Create a Helm chart that uses the .Release.Revision param within any of its manifests. Here's a typical example: https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/gitlab/charts/migrations/templates/_helpers.tpl

Expected behavior

Argo CD should inject .Release.Revision so that it auto-increments after every sync with a diff.

Screenshots

Version

Logs

Paste any relevant application logs here.