Closed: joaocc closed this issue 1 day ago
For your particular use case, there is a guide here: https://fluxcd.io/flux/use-cases/running-jobs/. You may run your job with a Kustomization, and your HelmRelease with another, dependent Kustomization.
See also the discussion here: https://github.com/fluxcd/flux2/discussions/2324
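The approach from the guide can be sketched like this (names and paths are illustrative, not from the issue): the Job lives in its own Kustomization that Flux waits on, and the Kustomization holding the HelmRelease declares a dependency on it.

```yaml
# Sketch of the fluxcd.io running-jobs pattern; names/paths are assumptions.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: db-migrations
  namespace: flux-system
spec:
  interval: 10m
  path: ./jobs
  prune: true
  wait: true            # block until the Job completes
  force: true           # recreate the immutable Job when its spec changes
  sourceRef:
    kind: GitRepository
    name: flux-system
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  dependsOn:
    - name: db-migrations   # the HelmRelease is only applied after the Job succeeds
  interval: 10m
  path: ./app
  sourceRef:
    kind: GitRepository
    name: flux-system
```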
Thanks! We will try that and provide feedback.
This issue relates to a use case we are trying to address using only Flux. We have a parent chart with several child charts. One of the child charts is an init job, which must be waited on by the deployments in the other child charts. Specifically, we are running dotnet migrations in the init job, as per https://andrewlock.net/deploying-asp-net-core-applications-to-kubernetes-part-8-running-database-migrations-using-jobs-and-init-containers/ and https://andrewlock.net/deploying-asp-net-core-applications-to-kubernetes-part-9-monitoring-helm-releases-that-use-jobs-and-init-containers/.
In a plain Helm world this works quite well, as we can include the release revision in the job name, so that the deployments of a release wait on the right job and not on older ones that may still be running.
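In plain Helm, that pattern can be sketched roughly like this (names and images are illustrative; the wait uses an init container, as in the linked articles):

```yaml
# Job in the init-job child chart, named per release revision.
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-migrations-{{ .Release.Revision }}"
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          image: example/migrations:1.0.0   # hypothetical image
---
# Init container fragment in a sibling child chart's Deployment pod spec,
# waiting for that exact Job (k8s-wait-for, as used in the articles).
initContainers:
  - name: wait-for-migrations
    image: groundnuty/k8s-wait-for:v1.3
    args: ["job", "{{ .Release.Name }}-migrations-{{ .Release.Revision }}"]
```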
However, when using HelmRelease this breaks down: retries and rollbacks mean a single release revision can have several "attempts", and therefore several jobs.
We tried to work around this by adding a timestamp to the job name (via tpl expressions, since values.yaml is static), but there are edge cases where Helm seems to evaluate some subcharts in a different second than the init job, leading to deployments waiting for the wrong job.
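The timestamp workaround looks roughly like this (a sketch of the idea, not the exact chart): each subchart evaluates `now` independently, so if rendering crosses a second boundary the names diverge.

```yaml
# In the init-job subchart:
metadata:
  name: "migrations-{{ now | date \"20060102150405\" }}"
# In a sibling subchart, the same expression is evaluated again to build
# the init container's wait target; if that evaluation lands one second
# later, the names no longer match and the deployment waits on a job
# that does not exist.
```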
We then thought we could use the version hash and release revision (as we use reconcileStrategy: Revision), but it seems that only the evaluation of the parent chart has the Git SHA appended to its version, while the child charts use their "plain" chart versions. (Sorry for the long intro.)
What we are looking for is a value such as <.Release.Revision>--<.Release.RetryNumber> that is guaranteed to be unique. Thanks!