kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0

Kustomize doesn't support metadata.generateName #641

Open shimmerjs opened 5 years ago

shimmerjs commented 5 years ago

I am trying to use kustomize with https://github.com/argoproj/argo.

Example spec:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [cowsay]
args: ["hello world"]

Argo Workflow CRDs don't require or use metadata.name, but I am getting the following error when I try to run kustomize build on an Argo Workflow resource:

Error: loadResMapFromBasesAndResources: rawResources failed to read Resources: Missing metadata.name in object {map[args:[hello world] kind:Workflow metadata:map[generateName:hello-world-] spec:map[entrypoint:whalesay templates:[map[container:map[command:[cowsay] image:docker/whalesay:latest] name:whalesay]]] apiVersion:argoproj.io/v1alpha1]}

Is there a way for me to override where kustomize looks for a name, so that it looks at metadata.generateName instead?

Liujingfang1 commented 5 years ago

Similar issues #627, #586

monopole commented 5 years ago

#627 is about names, but currently I see it as a feature request.

This bug and #586 are noting that kustomize doesn't recognize the kubernetes API directive generateName, which is indeed a bug.

This directive is a kustomize-like feature introduced before kustomize... (complicating our life).

We might try to allow it and work with it - or disallow it and provide an alternative mechanism.
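
For reference, this is the upstream behavior in question: an object submitted with only a generateName prefix gets a server-assigned name at create time. Assuming the spec above is saved as workflow.yaml (the suffix below is illustrative):

kubectl create -f workflow.yaml
workflow.argoproj.io/hello-world-x7k2q created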

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

confiq commented 5 years ago

/remove-lifecycle rotten

I wanted to use generateName with kustomize but I can't :(

anarcher commented 5 years ago

I wanted to use generateName with kustomize too.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/641#issuecomment-560046149):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

wpbeckwith commented 4 years ago

This issue should be reopened unless it has been solved and the docs don't show it.

jarednielsen commented 4 years ago

Agreed, let's re-open and solve the issue.

confiq commented 4 years ago

Anybody can reopen it with a @k8s-ci-robot command. I've already opened it once; I don't want to flood the thread :)

haimberger commented 4 years ago

/reopen

I've just stumbled across this issue as well, and would appreciate a fix or an alternative mechanism (as mentioned by monopole above).

k8s-ci-robot commented 4 years ago

@haimberger: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/641#issuecomment-593951721):

> /reopen
>
> I've just stumbled across this issue as well, and would appreciate a fix or an alternative mechanism (as mentioned by monopole [above](https://github.com/kubernetes-sigs/kustomize/issues/641#issuecomment-450532493)).

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

Datamance commented 4 years ago

/reopen

sigh.

k8s-ci-robot commented 4 years ago

@Datamance: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/641#issuecomment-598346427):

> /reopen
>
> sigh.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

stpierre commented 4 years ago

Can someone with The Power reopen this? Still outstanding AFAICT.

Liujingfang1 commented 4 years ago

/remove-lifecycle rotten

stpierre commented 4 years ago

The workaround I've used for Argo specifically is to define my workflow with a name:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: hello-world-
spec:
  ...

And then tell Kustomize to move that to generateName as the last patch:

resources:
  - hello-world.yaml
patches:
  - patch: |-
      - op: move
        from: /metadata/name
        path: /metadata/generateName
    target:
      kind: Workflow

This is obviously not very good, but it does let us use Kustomize with Argo (and without writing a Kustomize plugin).
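
One note on using this: since the rendered object ends up with generateName rather than name, the build output has to go through kubectl create rather than kubectl apply (apply rejects objects that only have generateName):

kustomize build . | kubectl create -f -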

tyuhara commented 4 years ago

I've just hit this issue with Spinnaker as well: Spinnaker was updated to 1.20.0, and its Kubernetes Job behavior changed to use the metadata.generateName field instead of metadata.name. In this case, kustomize build fails for the missing metadata.name.

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

jarednielsen commented 4 years ago

/remove-lifecycle stale

posquit0 commented 4 years ago

Is there any progress?

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

hadrien-toma commented 3 years ago

/remove-lifecycle stale

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

hadrien-toma commented 3 years ago

/remove-lifecycle stale

mgwismer commented 3 years ago

> The workaround I've used for Argo specifically is to define my workflow with a name:
>
> apiVersion: argoproj.io/v1alpha1
> kind: Workflow
> metadata:
>   name: hello-world-
> spec:
>   ...
>
> And then tell Kustomize to move that to generateName as the last patch:
>
> resources:
>   - hello-world.yaml
> patches:
>   - patch: |-
>       - op: move
>         from: /metadata/name
>         path: /metadata/generateName
>     target:
>       kind: Workflow
>
> This is obviously not very good, but it does let us use Kustomize with Argo (and without writing a Kustomize plugin).

But we need to change generateName to name, not the other way around. I tried the reverse patch but still get the same missing metadata.name error, because kustomize rejects the resource at load time, before any patches are applied.

boonware commented 3 years ago

Is there a progress update on this? It seems like a pretty big limitation. Not being able to use Kustomize with CRDs such as those in Argo Workflows is a huge drawback.

boonware commented 3 years ago

The workaround above does not work. I am using Kustomize v4.2.0. The following error is thrown. My files are shown below.

panic: number of previous names, number of previous namespaces, number of previous kinds not equal

goroutine 1 [running]:
sigs.k8s.io/kustomize/api/resource.(*Resource).PrevIds(...)
    sigs.k8s.io/kustomize/api@v0.8.11/resource/resource.go:345
sigs.k8s.io/kustomize/api/resource.(*Resource).OrgId(0xc0003513b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    sigs.k8s.io/kustomize/api@v0.8.11/resource/resource.go:328 +0x17a
sigs.k8s.io/kustomize/api/builtins.(*PrefixSuffixTransformerPlugin).Transform(0xc002a11d40, 0x4851050, 0xc00000f5f0, 0x0, 0x0)
    sigs.k8s.io/kustomize/api@v0.8.11/builtins/PrefixSuffixTransformer.go:52 +0xa9
sigs.k8s.io/kustomize/api/internal/target.(*multiTransformer).Transform(0xc0000d34e8, 0x4851050, 0xc00000f5f0, 0xc002dabaa0, 0x6)
    sigs.k8s.io/kustomize/api@v0.8.11/internal/target/multitransformer.go:30 +0x79
sigs.k8s.io/kustomize/api/internal/accumulator.(*ResAccumulator).Transform(...)
    sigs.k8s.io/kustomize/api@v0.8.11/internal/accumulator/resaccumulator.go:142
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).runTransformers(0xc0002f9040, 0xc000366240, 0x0, 0x0)
    sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:270 +0x225
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).accumulateTarget(0xc0002f9040, 0xc000366240, 0xc00022f610, 0x4449453, 0x0)
    sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:195 +0x2b0
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).AccumulateTarget(0xc0002f9040, 0x0, 0xffffffffffffffff, 0x0)
    sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:156 +0xce
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).makeCustomizedResMap(0xc0002f9040, 0x0, 0x0, 0x0, 0x1)
    sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:111 +0x2f
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).MakeCustomizedResMap(...)
    sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:107
sigs.k8s.io/kustomize/api/krusty.(*Kustomizer).Run(0xc0000d3d60, 0x484ea60, 0x4c2c0d8, 0x7ffeefbff96f, 0x1f, 0x0, 0x0, 0x0, 0x0)
    sigs.k8s.io/kustomize/api@v0.8.11/krusty/kustomizer.go:88 +0x3dd
sigs.k8s.io/kustomize/kustomize/v4/commands/build.NewCmdBuild.func1(0xc000320580, 0xc00031b500, 0x1, 0x3, 0x0, 0x0)
    sigs.k8s.io/kustomize/kustomize/v4/commands/build/build.go:80 +0x1c9
github.com/spf13/cobra.(*Command).execute(0xc000320580, 0xc00031b4d0, 0x3, 0x3, 0xc000320580, 0xc00031b4d0)
    github.com/spf13/cobra@v1.0.0/command.go:842 +0x472
github.com/spf13/cobra.(*Command).ExecuteC(0xc000320000, 0x0, 0xffffffff, 0xc00002e0b8)
    github.com/spf13/cobra@v1.0.0/command.go:950 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
    github.com/spf13/cobra@v1.0.0/command.go:887
main.main()
    sigs.k8s.io/kustomize/kustomize/v4/main.go:14 +0x2a

cron-workflow.yaml:

apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: foo
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: main
            templateRef:
              name: main
              template: main

kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argo
resources:
  - cron-workflow.yaml
patches:
  - target:
      kind: CronWorkflow
    path: patch.yaml

patch.yaml:

- op: move
  from: /metadata/name
  path: /metadata/generateName

RobinNagpal commented 2 years ago

Did anyone find a solution to this?

resources:
  - hello-world.yaml
patches:
  - patch: |-
      - op: move
        from: /metadata/name
        path: /metadata/generateName
    target:
      kind: Workflow

used to work for me in kustomize 3.8.6, but stopped working in the latest version, 4.4.0.

guillaumBrisard commented 2 years ago

Same here: the workaround used to work in 3.9.1 and stopped working with 3.9.4.

natasha41575 commented 2 years ago

/assign

jasonneurohr commented 2 years ago

The workaround worked in 4.3.0 but stopped working for me in 4.4.0.

EarthlingDavey commented 2 years ago

Incase you are using the output of kustomise, you can pipe it through sed like in this blog post: kubernetes generated names

kustomize build | sed -e 's/^ name.*$//g' | kubectl create -f -

Or try this for something more readable.

metadata:
  name: hello-world-
  generateName: hello-world-

kustomize build | sed -e 's/name: hello-world-/# name: hello-world-/' | kubectl create -f -

weh commented 2 years ago

> The workaround worked in 4.3.0 but stopped working for me in 4.4.0.

I can confirm this.

weh commented 2 years ago

git bisect tells me that this commit is the first one failing:

I searched between kustomize/v4.3.0 (good) and kustomize/v4.4.0 (bad).

f4382738ab1eaddb4fb726a2612c85022143ff7c is the first bad commit
commit f4382738ab1eaddb4fb726a2612c85022143ff7c
Author: Yuwen Ma <yuwenma@google.com>
Date:   Thu Sep 16 11:15:05 2021 -0700

    [fix 4124] Skip local resource until all transformations have completed.

    Resources annotated as "local-config" are expected to be ignored. This skip local resource happens in "accumulateResources" which happens before any transformation operations.
    However, the local resource may be needed in transformations.
    Thus, this change removes the "drop local resource" logic from accumulateResources and removes these local resource after all transformation operations and var operations are done.

    Note:
    None of the existing ResMap functions can drop the resource slice easily: "Clear" will ruin the resource order, "AppendAll" only adds non-existing resource, "AbsorbAll" only add or modify but not delete.
    Thus, we introduce a new func "Intersection" for resourceAccumulator that specificaly removes the resource by ID and keep the original order.

:040000 040000 3f20f37e2d6424f5f089b83c01fb628d84d47451 0fa90ed08e62b361b8de3163e1c8954e7499a960 M  api

claywd-x commented 2 years ago

Would like to see this fixed as well. Key feature IMO.

natasha41575 commented 2 years ago

/triage accepted
/kind bug

We will take a closer look to see if we can fix this.

natasha41575 commented 2 years ago

/retitle Kustomize doesn't support metadata.generateName

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

mfn commented 2 years ago

/remove-lifecycle stale

nickjj commented 2 years ago

I'd like to see this too; the implications of not having it are pretty big.

For example, Argo CD will delete the old job and create a new one, but without a unique name, tools like Datadog will overwrite the old job with the new one, so you lose the ability to see the history of the job's runs and how long each one took. With an identically named job, the best you can do is get container stats like logs, which don't contain details about the job itself.

Having generateName supported would allow each job to be individually tracked and persisted since they would each have their own name.

fabiohbarbosa commented 2 years ago

Any updates on this?

The patch workaround works neither for op: move nor op: remove.

- op: remove
  path: /metadata/name
- op: move
  from: /metadata/name
  path: /metadata/generateName

ricardo-s-ferreira-alb commented 2 years ago

I'm also fighting this issue. It's useful to run a job every time we deploy a new version, and we need to generate the job names randomly.

This has been open for a long time... any ETA?

mdrakiburrahman commented 2 years ago

I'm running into this whenever a Job contains metadata.generateName; it has nothing to do with Argo:

# kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- manifests.yaml
---
# manifests.yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: upgrade-sql-schema # <---- Problem
spec:
  template:
    spec:
      containers:
        - name: upgrade-sql-schema
          image: nginxinc/nginx-unprivileged
          command: ["sleep", "5"]

Error:

kubectl kustomize /path/to/folder
error: accumulating resources: accumulation err='accumulating resources from '../../base': '/workspaces/openshift-app-of-apps/sync-waves-demo/kustomize/base' must resolve to a file': recursed accumulation of path '/workspaces/openshift-app-of-apps/sync-waves-demo/kustomize/base': accumulating resources: accumulation err='accumulating resources from 'manifests.yaml': missing metadata.name in object {{batch/v1 Job} {{ } map[] map[]}}': got file 'manifests.yaml', but '/workspaces/openshift-app-of-apps/sync-waves-demo/kustomize/base/manifests.yaml' must be a directory to be a root

Workaround:

apiVersion: batch/v1
kind: Job
metadata:
  name: upgrade-sql-schema # <---- Workaround
...

With that change, Kustomize works fine.

The demo is pulled from Argo's sync-waves example, which depends on generateName to run hook Jobs, but the issue itself isn't Argo-specific, since kubectl supports Jobs with generateName but Kustomize does not...
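
For comparison, with the same manifests.yaml:

kubectl create -f manifests.yaml    # works: the API server generates the name
kubectl kustomize .                 # fails: missing metadata.name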

yanniszark commented 2 years ago

Perhaps a good way for kustomize to deal with generateName seamlessly would be to:

  1. Detect generateName and assign the object a random name with the given generateName prefix, but note that it did (for example, it could save some special annotation on the object).
  2. Deal with the object as with all others.
  3. Before returning the output, do a final pass and remove the name from all resources that have the special annotation set.

This could probably be implemented fairly quickly in code too since it doesn't break kustomize's assumption of identifying objects by apiversion/kind/name/namespace.
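
A minimal standalone sketch of steps 1 and 3, assuming nothing about kustomize's real internals (the annotation key, helper names, and use of gopkg.in/yaml.v3 are illustrative only, not kustomize API):

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"

	"gopkg.in/yaml.v3"
)

// marker is a hypothetical annotation recording that the name was invented.
const marker = "kustomize.example/needs-generate-name"

func randomSuffix() string {
	b := make([]byte, 4)
	_, _ = rand.Read(b)
	return hex.EncodeToString(b)
}

// preprocess gives a generateName-only object a temporary random name so the
// rest of the pipeline can identify it, and marks it with the annotation.
func preprocess(obj map[string]interface{}) {
	meta, _ := obj["metadata"].(map[string]interface{})
	if meta == nil {
		return
	}
	if _, hasName := meta["name"]; hasName {
		return
	}
	prefix, ok := meta["generateName"].(string)
	if !ok {
		return
	}
	meta["name"] = prefix + randomSuffix()
	anns, _ := meta["annotations"].(map[string]interface{})
	if anns == nil {
		anns = map[string]interface{}{}
		meta["annotations"] = anns
	}
	anns[marker] = "true"
}

// postprocess strips the temporary name (and the marker) before emitting the
// object, so the API server generates the real name on create.
func postprocess(obj map[string]interface{}) {
	meta, _ := obj["metadata"].(map[string]interface{})
	if meta == nil {
		return
	}
	anns, _ := meta["annotations"].(map[string]interface{})
	if anns == nil || anns[marker] != "true" {
		return
	}
	delete(meta, "name")
	delete(anns, marker)
	if len(anns) == 0 {
		delete(meta, "annotations")
	}
}

func main() {
	doc := []byte(`
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
`)
	var obj map[string]interface{}
	if err := yaml.Unmarshal(doc, &obj); err != nil {
		panic(err)
	}
	preprocess(obj)
	// ... ordinary transformations (namespace, labels, patches) would run here ...
	postprocess(obj)
	out, _ := yaml.Marshal(obj)
	fmt.Print(string(out))
}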

KnVerey commented 2 years ago

That's definitely an interesting idea. A few complications come to mind:

  1. Resources can be added from several different sources, and each entrypoint would need to handle the conversion (probably doable)
  2. Name references can't work with generateName (we don't have the real name, and we can't possibly get it), but with this solution, Kustomize's transformers would do incorrect and potentially confusing things with it. For example, the temporary internal name could show up in object references, or be targeted by replacements.
  3. Plugin transformers would be exposed to the workaround, i.e. they would receive objects with both fields populated and need to know what's up with that. In other words, the annotation would need to become part of the KRM Functions standard. I'm very reluctant to do that, since the reason for it is Kustomize internals.

dardosordi commented 2 years ago

It appears that the vast majority of us are just trying to generate random names for our jobs. If kustomize just generated the name using generateName as the prefix, most of us would be perfectly happy with it.
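
In the meantime, a rough external approximation of that, building on the sed workarounds above (it assumes the resource keeps name: hello-world- as in the earlier examples):

suffix=$(date +%s)
kustomize build . | sed -e "s/^  name: hello-world-$/  name: hello-world-${suffix}/" | kubectl create -f -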