shimmerjs opened this issue 5 years ago
Similar issues #627, #586
This bug and #586 note that kustomize doesn't recognize the Kubernetes API directive generateName, which is indeed a bug.
This directive is a kustomize-like feature introduced before kustomize... (complicating our lives).
We might try to allow it and work with it - or disallow it and provide an alternative mechanism.
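(For anyone unfamiliar with the directive, here's a minimal example, not taken from this issue: when an object is created, e.g. via kubectl create, the API server appends a random suffix to the generateName prefix.)
apiVersion: batch/v1
kind: Job
metadata:
  generateName: db-migrate-   # server generates a name such as db-migrate-x7k2p
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: busybox
        command: ["sh", "-c", "echo migrating"]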
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
I wanted to use generateName with kustomize but I can't :(
I wanted to use generateName with kustomize too.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
This issue should be reopened unless it has been solved and the docs don't show it.
Agreed, let's re-open and solve the issue.
Anybody can reopen it with a @k8s-ci-robot command. I've already reopened one; I don't want to flood it :)
/reopen
I've just stumbled across this issue as well, and would appreciate a fix or an alternative mechanism (as mentioned by monopole above).
@haimberger: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
sigh.
@Datamance: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Can someone with The Power reopen this? Still outstanding AFAICT.
/remove-lifecycle rotten
The workaround I've used for Argo specifically is to define my workflow with a name:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: hello-world-
spec:
  ...
And then tell Kustomize to move that to generateName as the last patch:
resources:
- hello-world.yaml
patches:
- patch: |-
    - op: move
      from: /metadata/name
      path: /metadata/generateName
  target:
    kind: Workflow
This is obviously not very good, but it does let us use Kustomize with Argo (and without writing a Kustomize plugin).
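(For what it's worth, the rendered output then has to go through kubectl create rather than kubectl apply, since apply doesn't work with generateName:)
kustomize build . | kubectl create -f -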
Is there any progress?
/remove-lifecycle stale
Quoting the workaround above (define the workflow with a name, then have Kustomize move it to generateName as the last patch):
But we need to change generateName to name, not the other way around. I tried this but still get the same error: metadata.name missing.
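For reference, the reverse patch would look like the sketch below, but as far as I can tell it fails for the same reason: kustomize reports the missing metadata.name while accumulating resources, before any patch is applied, so the resource never reaches the patch step.
# Hypothetical reverse of the patch above; it is rejected before it can run,
# because kustomize requires metadata.name when it loads the resource.
- op: move
  from: /metadata/generateName
  path: /metadata/name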
Is there a progress update on this? It seems like a pretty big limitation. Not being able to use Kustomize with CRDs such as those in Argo Workflows is a huge drawback.
The workaround above does not work. I am using Kustomize v4.2.0. The following error is thrown. My files are shown below.
panic: number of previous names, number of previous namespaces, number of previous kinds not equal
goroutine 1 [running]:
sigs.k8s.io/kustomize/api/resource.(*Resource).PrevIds(...)
sigs.k8s.io/kustomize/api@v0.8.11/resource/resource.go:345
sigs.k8s.io/kustomize/api/resource.(*Resource).OrgId(0xc0003513b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
sigs.k8s.io/kustomize/api@v0.8.11/resource/resource.go:328 +0x17a
sigs.k8s.io/kustomize/api/builtins.(*PrefixSuffixTransformerPlugin).Transform(0xc002a11d40, 0x4851050, 0xc00000f5f0, 0x0, 0x0)
sigs.k8s.io/kustomize/api@v0.8.11/builtins/PrefixSuffixTransformer.go:52 +0xa9
sigs.k8s.io/kustomize/api/internal/target.(*multiTransformer).Transform(0xc0000d34e8, 0x4851050, 0xc00000f5f0, 0xc002dabaa0, 0x6)
sigs.k8s.io/kustomize/api@v0.8.11/internal/target/multitransformer.go:30 +0x79
sigs.k8s.io/kustomize/api/internal/accumulator.(*ResAccumulator).Transform(...)
sigs.k8s.io/kustomize/api@v0.8.11/internal/accumulator/resaccumulator.go:142
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).runTransformers(0xc0002f9040, 0xc000366240, 0x0, 0x0)
sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:270 +0x225
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).accumulateTarget(0xc0002f9040, 0xc000366240, 0xc00022f610, 0x4449453, 0x0)
sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:195 +0x2b0
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).AccumulateTarget(0xc0002f9040, 0x0, 0xffffffffffffffff, 0x0)
sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:156 +0xce
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).makeCustomizedResMap(0xc0002f9040, 0x0, 0x0, 0x0, 0x1)
sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:111 +0x2f
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).MakeCustomizedResMap(...)
sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:107
sigs.k8s.io/kustomize/api/krusty.(*Kustomizer).Run(0xc0000d3d60, 0x484ea60, 0x4c2c0d8, 0x7ffeefbff96f, 0x1f, 0x0, 0x0, 0x0, 0x0)
sigs.k8s.io/kustomize/api@v0.8.11/krusty/kustomizer.go:88 +0x3dd
sigs.k8s.io/kustomize/kustomize/v4/commands/build.NewCmdBuild.func1(0xc000320580, 0xc00031b500, 0x1, 0x3, 0x0, 0x0)
sigs.k8s.io/kustomize/kustomize/v4/commands/build/build.go:80 +0x1c9
github.com/spf13/cobra.(*Command).execute(0xc000320580, 0xc00031b4d0, 0x3, 0x3, 0xc000320580, 0xc00031b4d0)
github.com/spf13/cobra@v1.0.0/command.go:842 +0x472
github.com/spf13/cobra.(*Command).ExecuteC(0xc000320000, 0x0, 0xffffffff, 0xc00002e0b8)
github.com/spf13/cobra@v1.0.0/command.go:950 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.0.0/command.go:887
main.main()
sigs.k8s.io/kustomize/kustomize/v4/main.go:14 +0x2a
cron-workflow.yaml:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: foo
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: main
        templateRef:
          name: main
          template: main
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argo
resources:
- cron-workflow.yaml
patches:
- target:
    kind: CronWorkflow
  path: patch.yaml
patch.yaml:
- op: move
  from: /metadata/name
  path: /metadata/generateName
Did anyone find a solution to this?
resources:
- hello-world.yaml
patches:
- patch: |-
    - op: move
      from: /metadata/name
      path: /metadata/generateName
  target:
    kind: Workflow
used to work for me with kustomize 3.8.6, but stopped working in the latest version, i.e. 4.4.0.
Same, the workaround used to work in 3.9.1 and stopped working with 3.9.4.
/assign
The workaround worked in 4.3.0 but stopped working for me in 4.4.0.
In case you are using the output of kustomize, you can pipe it through sed, as in this blog post: kubernetes generated names
kustomize build | sed -e 's/^ name.*$//g' | kubectl create -f -
Or try this for something more readable.
metadata:
  name: hello-world-
  generateName: hello-world-
kustomize build | sed -e 's/name: hello-world-/# name: hello-world-/' | kubectl create -f -
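For clarity (my reading of the trick, not part of the original comment): after the sed, the rendered metadata looks like the snippet below, so kubectl create falls back to generateName because name has been commented out.
metadata:
  # name: hello-world-
  generateName: hello-world-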
The workaround worked in 4.3.0 but stopped working for me in 4.4.0.
I can confirm this.
git bisect tells me that this commit is the first one failing. I searched between kustomize/v4.3.0 (good) and kustomize/v4.4.0 (bad):
f4382738ab1eaddb4fb726a2612c85022143ff7c is the first bad commit
commit f4382738ab1eaddb4fb726a2612c85022143ff7c
Author: Yuwen Ma <yuwenma@google.com>
Date: Thu Sep 16 11:15:05 2021 -0700
[fix 4124] Skip local resource until all transformations have completed.
Resources annotated as "local-config" are expected to be ignored. This skip local resource happens in "accumulateResources" which happens before any transformation operations.
However, the local resource may be needed in transformations.
Thus, this change removes the "drop local resource" logic from accumulateResources and removes these local resource after all transformation operations and var operations are done.
Note:
None of the existing ResMap functions can drop the resource slice easily: "Clear" will ruin the resource order, "AppendAll" only adds non-existing resources, "AbsorbAll" only adds or modifies but does not delete.
Thus, we introduce a new func "Intersection" for resourceAccumulator that specifically removes resources by ID and keeps the original order.
:040000 040000 3f20f37e2d6424f5f089b83c01fb628d84d47451 0fa90ed08e62b361b8de3163e1c8954e7499a960 M api
Would like to see this fixed as well. Key feature IMO.
/triage accepted
/kind bug
We will take a closer look to see if we can fix this.
/retitle Kustomize doesn't support metadata.generateName
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I'd like to see this too; the implications of not having it are pretty big.
For example, Argo CD will delete the old job and create a new one, but without a unique name, tools like Datadog overwrite the old job with the new one, so you lose the history of the job's runs and how long each one took. With an identically named job, the best you can do is get container stats like logs, which don't contain details about the job itself.
Having generateName supported would allow each job to be individually tracked and persisted, since each would have its own name.
Any updates on that?
The patch workaround works neither for op: move nor op: remove.
- op: remove
  path: /metadata/name
- op: move
  from: /metadata/name
  path: /metadata/generateName
I'm also fighting this issue. It's useful to run a job every time we deploy a new version, and we need to generate job names randomly.
This has been open for a long time... any timeline?
Running into this if a Job contains metadata.generateName, nothing to do with Argo:
# kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manifests.yaml
---
# manifests.yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: upgrade-sql-schema # <---- Problem
spec:
  template:
    spec:
      containers:
      - name: upgrade-sql-schema
        image: nginxinc/nginx-unprivileged
        command: ["sleep", "5"]
Error:
kubectl kustomize /path/to/folder
error: accumulating resources: accumulation err='accumulating resources from '../../base': '/workspaces/openshift-app-of-apps/sync-waves-demo/kustomize/base' must resolve to a file': recursed accumulation of path '/workspaces/openshift-app-of-apps/sync-waves-demo/kustomize/base': accumulating resources: accumulation err='accumulating resources from 'manifests.yaml':
missing metadata.name in object {{batch/v1 Job} {{ } map[] map[]}}': got file 'manifests.yaml', but '/workspaces/openshift-app-of-apps/sync-waves-demo/kustomize/base/manifests.yaml' must be a directory to be a root
Workaround:
apiVersion: batch/v1
kind: Job
metadata:
  name: upgrade-sql-schema # <---- Workaround
...
^Kustomize works fine.
The demo is pulled from Argo's sync-waves, as it depends on generateName to run hook Jobs, but the issue itself isn't Argo-specific, since kubectl supports Jobs with generateName but Kustomize does not...
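As a quick sanity check (assuming the kustomization.yaml and manifests.yaml above are in the current directory), kubectl handles the manifest directly while the kustomize path rejects it:
kubectl create -f manifests.yaml   # accepted: the server appends a random suffix to the generateName prefix
kubectl kustomize .                # rejected: missing metadata.name in object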
Perhaps a good way for kustomize to deal with generateName seamlessly would be to notice that an object uses generateName, assign the object a random name with the given generateName prefix, and note that it did so (for example, it could save some special annotation on the object). This could probably be implemented fairly quickly in code too, since it doesn't break kustomize's assumption of identifying objects by apiversion/kind/name/namespace.
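To illustrate the idea with a purely hypothetical example (the annotation key and generated suffix below are made up, not an existing kustomize feature):
# Input resource:
apiVersion: batch/v1
kind: Job
metadata:
  generateName: db-migrate-

# Possible kustomize output under this proposal: a concrete name derived from
# the prefix, plus an annotation recording that it came from generateName.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate-k7f2q
  annotations:
    kustomize.example/generated-from: generateName   # hypothetical annotation key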
That's definitely an interesting idea. A few complications come to mind:
It appears that the vast majority of us are just trying to generate random names for our jobs. If kustomize just generated the name using generateName as the prefix, most of us would be perfectly happy with it.
I am trying to use kustomize with https://github.com/argoproj/argo. Example spec:
Argo Workflow CRDs don't require or use metadata.name, but I am getting the following error when I try to run kustomize build on an Argo Workflow resource:
Is there a way for me to override where kustomize looks for a name to metadata.generateName?