Open shimmerjs opened 5 years ago
Any chance this gets fixed?
Just ran into this as well when working with Jobs.
So our workaround (awkward and somewhat anti-pattern-ish for CI purists, but very acceptable to use and surprisingly not as hack-job-ish as I thought it'd be) for running Jobs in Argo with generated names is as follows. It is working quite well, actually, but it is not a "code-based" workaround so much as an alternative way of accomplishing the same thing using different processes:
1. Each base `kustomization.yaml` has a `namesuffix` for all resources in it, which, of course, is statically configured in the files, but this becomes important in the next step.
2. A GitHub Action inspects incoming changes. If the changes do NOT include a change to the `kustomization.yaml` file that belongs to the base referenced in 1, then it follows the "normal" path of triggering an ArgoCD sync (via webhook). If the changes DO include a change to the `kustomization.yaml` file that belongs to the base referenced in 1, it goes to every occurrence of the base (we have multiple apps that use this pattern now, so it's basically a for loop with that glob) and runs `kustomize edit set namesuffix -$(head -c 32 /dev/urandom | md5 | tr -dc 'a-zA-Z0-9' | fold -w 6 | head -n 1)-$(date +%s)` against each base Kustomization. This is the critical part, because now we can use `namesuffix` as the workaround for the fact that `generateName` is inoperable: every time the base changes, `namesuffix` is set to something random and unique.
3. Unsure if this is needed, but we also add the following annotations to all Job manifests:
argocd.argoproj.io/sync-options: Replace=true, Force=true
argocd.argoproj.io/hook: PostSync, SyncFail
argocd.argoproj.io/hook-delete-policy: HookFailed, HookSucceeded
argocd.argoproj.io/sync-wave: "-1"
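A rough sketch of the suffix-rotation step described above, assuming a GNU userland (`md5sum` in place of BSD's `md5`) and a hypothetical `apps/*/base` directory layout:

```shell
# Build a unique suffix: 6 random alphanumeric characters plus a unix timestamp.
rand_part=$(head -c 32 /dev/urandom | md5sum | tr -dc 'a-zA-Z0-9' | fold -w 6 | head -n 1)
suffix="-${rand_part}-$(date +%s)"
echo "suffix: ${suffix}"

# Then, for every base whose kustomization.yaml changed (glob is hypothetical):
# for base in apps/*/base; do
#   (cd "$base" && kustomize edit set namesuffix "$suffix")
# done
```

The actual `kustomize edit` loop is commented out since it depends on your repo layout; the key point is that the suffix is regenerated on every base change.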
*This GH Action has a lot of conditionals and safety checks to avoid loops. Think long and hard about this. You can and will get weirdo edge cases that will cost you a lot of money if you don't! We use self-hosted runners for most stuff, but still, those have to run somewhere...cheaper than GH minutes by far, but still not free.
**Always a good idea to have a billing ceiling on this stuff, but especially here, IMO. The chance of infinite loops is not nil. It's easier to have to bump up your low limit every few days than to run up a huge bill.
We worked around the Job name problem by setting ttlSecondsAfterFinished: 0
[1] so the Job is removed automatically after completion. The next time the Job is created, the previous one is already deleted.
[1] https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically
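For reference, a minimal Job manifest using this approach might look like the following (the name and image are illustrative, not from the comment above):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: data-migration            # illustrative name
spec:
  ttlSecondsAfterFinished: 0      # TTL controller deletes the Job as soon as it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example.com/migrate:latest   # illustrative image
```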
Still no workaround for this yet. I find myself moving from kustomize to helm charts just because of this 🤦🏿
I don't want to get anyone's hopes up, but here's a PR that works in local testing: https://github.com/kubernetes-sigs/kustomize/pull/4981.
I don't understand `kustomize` enough to know if this is really sufficient, but I look forward to comments pointing me in the right direction.
Looks like these questions on #4838 need answering and comprehensive tests need implementing.
It's not just a case of changing the validation to allow it.
Indeed, my PR was too naive there, oh well. I closed it and hope #4838 moves forward.
I also have just run into this issue while attempting to use `generateName` on Jobs. Would be nice to see some fix or official workaround communicated.
I am pasting my comment from https://github.com/kubernetes-sigs/kustomize/pull/4838#issuecomment-1295357914 for visibility:
I think before we can accept this PR, we need to agree on several details. Again, some more things that come to mind:
- How `generateName` interacts with `namePrefix`/`nameSuffix`. I see above you suggest that `generateName` should not interact with `namePrefix` or `nameSuffix`. That is a valid opinion, and allowing name transformations on `generateName` would complicate name references, but I need to think a bit more about pros/cons.
- How `generateName` interacts with `patches`? How does a patch target such a resource? For example, some options that I can think of:
  - Add a new field to the `patch` targets that allows selection based on the `generateName` field.
  - Keep patch targets as is, and only allow such resources to be targeted by their GVK, labels, and/or annotations.
- We will also need to think about whether we should allow the patch to change the value of `generateName`. Patches are allowed to change the `name` field, so it may be expected that we eventually support this too.
- Same as the above, but with `replacements`.
- What should kustomize do if there are multiple objects with the same `generateName`? Should kustomize allow this, and if so, how can we differentiate between the identical resource IDs? For example, `reswrangler` doesn't let you add two resources with the same ID. I haven't looked carefully at your tests yet, but we should make sure that we have tests covering this.
- This PR doesn't seem to touch `reswrangler` or `resource` at all. That surprises me, as that is where a lot of the resource-identifying code is. Maybe you're right that we actually don't need to touch it at all, but I think it would be helpful to take a closer look at that code and see if it makes sense to add support for `generateName` there.

I plan to bring this PR up with the other kustomize maintainers, but if you have thoughts on these points feel free to share here. @KnVerey can you think of anything else we need to make sure we think about?

Edited to add: I talked with the other maintainers and I think we need to see tests that show the use of `generateName` with all the other generator/transformer fields of kustomize so that we can see how it would behave.
I think to move this issue forward we would need a mini in-repo KEP to fully flesh out all of these details.
Any updates on that?
The patch workaround works for neither `op: move` nor `op: remove`:

- op: remove
  path: /metadata/name
- op: move
  from: /metadata/name
  path: /metadata/generateName
this leads to:
panic: number of previous names, number of previous namespaces, number of previous kinds not equal
I'm using kustomize v5
/assign
Sorry, the issue is a bit too complicated for me. I may come back later, but please take it if anyone else can 🙇 /unassign
I also ran into same issue with argocd
This same issue happens if you try to kustomize the knative-operator, as it requires a Job:
apiVersion: batch/v1
kind: Job
metadata:
generateName: storage-version-migration-operator-
This does limit the usability of kustomize for this use case, unfortunately, and with kustomize being built into kubectl I would expect it to at least match the current rules defined for the built-in specs.
is there anything "chop wood, carry water" that i can do to help move this forwards?
How generateName interacts with namePrefix/nameSuffix. I see above you suggest that generateName should not interact with namePrefix or nameSuffix. That is a valid opinion, and allowing name transformations on generateName would complicate name references, but I need to think a bit more about pros/cons.
It should be able to interact with `namePrefix`; it should not be able to interact with `nameSuffix`, as `generateName` handles the suffix internally. But I'm 100% OK with it being ignored by both if we can get the functionality working.
How generateName interacts with patches? How does a patch target such a resource? For example, some options that I can think of:
Add a new field to the patch targets that allow selection based on the generateName field.
This ^
Keep patch targets as is, and only allow such resources to be targeted by their GVK, labels, and/or annotations.
This is also a valid use case:
patches:
- path: <relative path to file containing patch>
target:
group: batch
version: v1
kind: Job
name: <optional name or regex pattern> # OR
generateName: <the given .metadata.generateName prefix> # but not BOTH, and only for the kinds that accept them
namespace: <optional namespace>
labelSelector: <optional label selector>
annotationSelector: <optional annotation selector>
We will also need to think about if we should allow the patch to change the value of generateName. Patches are allowed to change the name field, so it may be expected that we eventually support this too.
we should be able to modify the /metadata/generateName field in the same way we could the /metadata/name field using patches
Same as the above, but with replacements.
we should be able to modify the /metadata/generateName field in the same way we could the /metadata/name field using replacements.
What should kustomize do if there are multiple objects with the same generateName? Should kustomize allow this, and if so, how can we differentiate between the identical resource IDs? For example, reswrangler doesn't let you add two resources with the same ID. I haven't looked carefully at your tests yet, but we should make sure that we have tests covering this.
kustomize should not allow multiple like-kind resources (for example, `jobs.batch/v1` objects) with the same `generateName` field, as that would be abnormal for how the spec is used; the `generateName` value for a given Kind should be unique within the kustomize output.
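For context: the Kubernetes API server itself is fine with repeated `generateName` values, since it appends a random suffix to each created object; the uniqueness question above is purely about kustomize's internal resource IDs. A minimal illustration (the prefix is made up):

```yaml
# Both Jobs are acceptable to the API server (each create call yields a
# distinct generated name), but within a single kustomization they would
# currently collide on the same resource ID.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: migrate-
---
apiVersion: batch/v1
kind: Job
metadata:
  generateName: migrate-
```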
This PR doesn't seem to touch reswrangler or resource at all. That surprises me, as that is where a lot of resource-identifying code is. Maybe you're right that we actually don't need to touch it at all, but I think it would be helpful to take a closer look at that code and see if it makes sense to add support for generateName there.
I don't know enough to state my opinion on this ^
Also ran into this issue.
Also ran into this issue.
And 5 years later we have no solution.
Not sure what's the best way to get the developers' attention to this... Maybe opening a discussion? Is there an official Slack?
We also just ran into this. All related PRs got closed, so what's the plan now? What exactly can we do to bring this forward?
I'm working on a project where Kustomize was selected for ArgoCD. We were hoping to extend its use to our Argo Workflows but this is a significant impediment. Deleting previous workflow instances as was suggested in one comment does not align with our operational requirements. I'm sure there are plenty of usecases outside of Argo Workflow that are also affected by this issue.
Please provide a solution or how we can support the kustomize team closing this issue - it is blocking our pipeline development in Argo Workflow at this moment as we want to kustomize pipelines.
I ran into this and tried to find a solution. After some failed attempts, the below seemed to be OK:
patches:
  - patch: |
      - op: replace
        path: /metadata
        value:
          generateName: data-migration-
    target:
      kind: Job
Oh, no. It is only OK for `kubectl kustomize`, but it fails with ArgoCD, and running `kustomize build` fails too.
kustomize version: v4.4.0
`kubectl kustomize` is literally kustomize, and so is the one in ArgoCD. There is no difference, except for the version that is used.
Repeating what I wrote in https://github.com/kubernetes-sigs/kustomize/issues/641#issuecomment-1684304344:
To move this issue forward we would need a mini in-repo KEP to fully flesh out all the design and implementation details of how this feature would be handled.
The interesting part is who writes that proposal. And why would, for example, the community write this one and the maintainers some other?
For people looking for a solution: this utilizes an exec KRM plugin. It depends on `yq` and `openssl rand`.
Folder structure
base/
  job-name-generator.yaml
  job.yaml
  kustomization.yaml
plugins/
  job-name-generator.sh
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: []
# The job is added by the generator
# - job.yaml
generators:
- job-name-generator.yaml
job.yaml
apiVersion: batch/v1
kind: Job
metadata:
generateName: schema-migrate-
spec:
template:
spec: {}
job-name-generator.yaml
apiVersion: kustomize.example.com/v1
kind: JobNameGenerator
metadata:
name: schema-migrate
annotations:
config.kubernetes.io/function: |
exec:
path: ../plugins/job-name-generator.sh
spec:
resourcePath: ./job.yaml
job-name-generator.sh
#!/usr/bin/env bash
# read the `kind: ResourceList` from stdin
resourceList=$(cat)
# extract the resource path
export resourcePath=$(echo "${resourceList}" | yq e '.functionConfig.spec.resourcePath' - )
# generate the job hash
export job_hash=$(openssl rand -hex 3 | cut -c 1-5)
# dump the job into the output ResourceList, add name from generateName + the job hash, and delete generateName
echo "
kind: ResourceList
items: []
" | yq e 'load(env(resourcePath)) as $resource | .items += $resource | .items[0].metadata.name = .items[0].metadata.generateName + env(job_hash) | del(.items[0].metadata.generateName)' -
I am trying to use `kustomize` with https://github.com/argoproj/argo. Example spec:

Argo Workflow CRDs don't require or use `metadata.name`, but I am getting the following error when I try to run `kustomize build` on an Argo Workflow resource:

Is there a way for me to override where `kustomize` looks for a name to `metadata.generateName`?