Open karlschriek opened 1 year ago
I fully support this since I ran into this issue today: https://github.com/kubernetes-sigs/kustomize/issues/3481#issuecomment-1434407293
> I have noticed the following changes.
@mgazza What you've described is a separate issue, and is captured by https://github.com/kubernetes-sigs/kustomize/issues/5049, which we have decided to accept.
Create documentation for migrating from deprecated (removed in 5.0.0)
Just so we are on the same page: the `patchesStrategicMerge` and `patchesJson6902` fields are deprecated in v5, not removed. They will never be removed from the Kustomization v1beta1 type, but at some point we will create a Kustomization v1 type that will no longer include them. After that (likely years away), we will eventually stop supporting v1beta1. We've announced the deprecation early so that folks will start with and migrate to the newer fields, and report any shortcomings we need to address in them, such as #5049.
Ideally, the migration path is simple: you run `kustomize edit fix`, and your Kustomization is updated for you (#5040 is an outstanding issue with that / an alternative to #5049). The `edit fix` command is already mentioned in the docs: see the end of the deprecation notice here, for example. Is there a particular conversion that is not working for you, or that you want to do manually and are unsure how? Generally speaking, you need to add either the `patch:` or the `path:` key before each existing value, as appropriate based on the content.
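To make that concrete, here is a sketch of a before/after conversion (the patch file names and the target are illustrative, not taken from this thread):

```yaml
# Before: deprecated v1beta1 fields
patchesStrategicMerge:
  - deployment-patch.yaml
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: my-app
    path: json-patch.yaml

# After: the unified `patches` field; each entry gets a `path:` key
# (for a patch file) or a `patch:` key (for an inline patch)
patches:
  - path: deployment-patch.yaml
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: my-app
    path: json-patch.yaml
```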
/triage needs-information
/remove-kind feature
/kind documentation
I have noticed the following changes:
- `patchesStrategicMerge` allowed multiple patches to exist in a single patch file separated by `---`; `patches` doesn't. (@mgazza)
Correct, and when using `kustomize edit fix` as suggested by the latest v5.0.1, it breaks the `kustomize build`, because, as noted, `patchesStrategicMerge` allowed multiple patches in one file.
I guess this is tracked here: #5049
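In other words, a multi-document patch file has to be split before the conversion: where `patchesStrategicMerge` accepted one file containing several patches separated by `---`, each `patches` entry may reference only one patch. A sketch, with illustrative file names:

```yaml
# Before: all-patches.yaml contained several patch documents
# separated by '---'
# patchesStrategicMerge:
#   - all-patches.yaml

# After: split all-patches.yaml into one file per patch document,
# then reference each file individually
patches:
  - path: deployment-patch.yaml
  - path: service-patch.yaml
```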
I looked for a while at how to do the migration. In my scenario it's simple: just move from `patchesStrategicMerge:` to `patches:`. You can find more on the official website:
https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/
`kustomize edit fix` does not work for me. First, I don't have kustomize installed, so I need to use `kubectl kustomize edit fix` instead, and that gives the error message `error: specify one path to kustomization.yaml`. I tried it with `kubectl kustomize edit fix overlays/myoverlay` and also with pointing directly at the kustomization.yaml; it always gives the same error. Also, `kubectl kustomize edit fix --help` seems to output the help for `kubectl kustomize` instead. So the official error, stating I should use `kustomize edit fix`, was very much useless for me.
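As far as I can tell, the kustomize embedded in kubectl only exposes the `build` functionality; the `edit` subcommands ship only with the standalone binary, so kubectl parses `edit fix overlays/myoverlay` as multiple build paths and complains. A sketch of the workaround, assuming the official install script (the overlay directory name is illustrative); note that `kustomize edit fix` takes no path argument and operates on the kustomization.yaml in the current directory:

```shell
# Install the standalone kustomize binary (the official install script
# drops a ./kustomize binary into the current directory; move it onto
# your PATH)
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
sudo mv kustomize /usr/local/bin/

# 'edit fix' takes no path argument: run it from the directory that
# contains the kustomization.yaml you want to convert
cd overlays/myoverlay
kustomize edit fix
```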
I'm experiencing exactly the same behavior.
@schlichtanders @cyberslot you mention using the kustomize bundled with kubectl (`kubectl kustomize`): can you ensure you use kubectl version 1.27+? Older versions of kubectl bundled the v4 version of kustomize. You can check with `kubectl version`.
@sbocinec Personally, I've tried both ways without success.
k version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.1-gke.1066000
kustomize version
v5.1.1
Same for me,
[sam@sam-redhat-laptop copypvc]$ kubectl kustomize edit fix overlays/example/kustomization.yaml
error: specify one path to kustomization.yaml
[sam@sam-redhat-laptop copypvc]$ kubectl kustomize edit fix overlays/example
error: specify one path to kustomization.yaml
[sam@sam-redhat-laptop copypvc]$ kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Same issue with same versions
How would one now handle semantically "empty" kustomization.yaml files with >v5.0.0, such as the following:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
Building a base that references this file yields an error:
deployment/base/resource-quotas': kustomization.yaml is empty
The file and folder hierarchy is still needed for legacy reasons.
Have you tried turning this into a component? You could add a dummy image mapping or something like that.
I used an empty.yaml and referenced it in `resources`.
I just used an empty list in kustomization.yaml, like so: `resources: []`
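For reference, a minimal kustomization.yaml along those lines that keeps the v5 build from rejecting the file as empty would be a sketch like:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# An explicitly empty resources list satisfies the
# "kustomization.yaml is empty" check while keeping the overlay a no-op.
resources: []
```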
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Same issue :/
This thread is pretty dead, but figured I'd chime in. I used to have all my strategic merge patches under the `patches` key in my kustomization.yml file. This no longer works with the 5.x schema change.
First thing I did was change the key to `patchesStrategicMerge` (yes, I know it is deprecated, but it's needed for the `edit fix` command to run); otherwise the `kustomize edit fix` command would yield:
Error: invalid Kustomization: json: cannot unmarshal string into Go struct field Kustomization.patches of type types.Patch
Once I changed the key and ran `edit fix`, my kustomization.yml was mutated, but it still failed with the panic below:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x40 pc=0x1051fe4b4]
goroutine 1 [running]:
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).Content(...)
sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:724
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).getMapFieldValue(0x14000040570?, {0x1053556c4?, 0x140005b89b8?})
sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:437 +0x54
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).GetApiVersion(...)
sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:419
sigs.k8s.io/kustomize/kyaml/resid.GvkFromNode(0x140001010e0)
sigs.k8s.io/kustomize/kyaml/resid/gvk.go:32 +0x40
sigs.k8s.io/kustomize/api/resource.(*Resource).GetGvk(...)
sigs.k8s.io/kustomize/api/resource/resource.go:57
sigs.k8s.io/kustomize/api/resource.(*Resource).CurId(0x140001010e0)
sigs.k8s.io/kustomize/api/resource/resource.go:462 +0x48
sigs.k8s.io/kustomize/api/resmap.(*resWrangler).GetMatchingResourcesByAnyId(0x140005b8d98?, 0x14000681a40)
sigs.k8s.io/kustomize/api/resmap/reswrangler.go:184 +0xac
sigs.k8s.io/kustomize/api/resmap.demandOneMatch(0x140005b8e98, {{{0x14000e76828, 0x14}, {0x14000e7683d, 0x2}, {0x14000e76840, 0x18}, 0x1}, {0x140018c3100, 0x19}, ...}, ...)
sigs.k8s.io/kustomize/api/resmap/reswrangler.go:227 +0xc8
sigs.k8s.io/kustomize/api/resmap.(*resWrangler).GetById(0x140005b0780?, {{{0x14000e76828, 0x14}, {0x14000e7683d, 0x2}, {0x14000e76840, 0x18}, 0x1}, {0x140018c3100, 0x19}, ...})
sigs.k8s.io/kustomize/api/resmap/reswrangler.go:214 +0x9c
sigs.k8s.io/kustomize/api/internal/builtins.(*PatchTransformerPlugin).transformStrategicMerge(0x2e?, {0x105640800, 0x14001b80ca8})
sigs.k8s.io/kustomize/api/internal/builtins/PatchTransformer.go:112 +0x2dc
sigs.k8s.io/kustomize/api/internal/builtins.(*PatchTransformerPlugin).Transform(0x14001b80ca8?, {0x105640800?, 0x14001b80ca8?})
sigs.k8s.io/kustomize/api/internal/builtins/PatchTransformer.go:87 +0x2c
sigs.k8s.io/kustomize/api/internal/target.(*multiTransformer).Transform(0x140007b5c70?, {0x105640800, 0x14001b80ca8})
sigs.k8s.io/kustomize/api/internal/target/multitransformer.go:30 +0x88
sigs.k8s.io/kustomize/api/internal/accumulator.(*ResAccumulator).Transform(...)
sigs.k8s.io/kustomize/api/internal/accumulator/resaccumulator.go:141
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).runTransformers(0x140007b5c70, 0x14002102c20)
sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:343 +0x1ac
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).accumulateTarget(0x140007b5c70, 0x7?)
sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:237 +0x318
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).AccumulateTarget(0x140007b5c70)
sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:194 +0x104
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).makeCustomizedResMap(0x140007b5c70)
sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:135 +0x68
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).MakeCustomizedResMap(...)
sigs.k8s.io/kustomize/api/internal/target/kusttarget.go:126
sigs.k8s.io/kustomize/api/krusty.(*Kustomizer).Run(0x140005b9b28, {0x10563b7a0, 0x105c25ea8}, {0x10548b5a8, 0x1})
sigs.k8s.io/kustomize/api/krusty/kustomizer.go:90 +0x248
sigs.k8s.io/kustomize/kustomize/v5/commands/build.NewCmdBuild.func1(0x14000314608, {0x0?, 0x14000634880?, 0x10562f4f8?})
sigs.k8s.io/kustomize/kustomize/v5/commands/build/build.go:84 +0x15c
sigs.k8s.io/kustomize/kustomize/v5/commands/edit/fix.RunFix({0x10563b7a0, 0x105c25ea8}, {0x10562f278, 0x14000196048})
sigs.k8s.io/kustomize/kustomize/v5/commands/edit/fix/fix.go:91 +0x1d0
sigs.k8s.io/kustomize/kustomize/v5/commands/edit/fix.NewCmdFix.func1(0x14000207000?, {0x10535268c?, 0x4?, 0x105352690?})
sigs.k8s.io/kustomize/kustomize/v5/commands/edit/fix/fix.go:35 +0x2c
github.com/spf13/cobra.(*Command).execute(0x140002c7b08, {0x105c25ea8, 0x0, 0x0})
github.com/spf13/cobra@v1.8.0/command.go:983 +0x840
github.com/spf13/cobra.(*Command).ExecuteC(0x1400025a608)
github.com/spf13/cobra@v1.8.0/command.go:1115 +0x344
github.com/spf13/cobra.(*Command).Execute(0x105b1bec8?)
github.com/spf13/cobra@v1.8.0/command.go:1039 +0x1c
main.main()
sigs.k8s.io/kustomize/kustomize/v5/main.go:14 +0x20
Turns out, I had to add the `target` key to the patch object (in the `patches` key, type `PatchesPatchPath`) in order to receive a more appropriate error message. Not sure why the `kustomize edit fix` command executes if it creates the above panic... but hopefully this comment helps someone else. My issue then became:
Error: Multiple Strategic-Merge Patches in one `patches` entry is not allowed to set `patches.target` field: [path: "patch/crd-remove.yml"]
So, once I split out my patch file so it was only one patch per file, my error went away and everything was gravy (no `target` needed once the multiple-patches-in-a-single-file issue was resolved).
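So the working end state, as a sketch (the split file names are illustrative): each file holds exactly one strategic-merge patch and is listed separately, with no `target:` selector required:

```yaml
patches:
  - path: patch/crd-remove-1.yml
  - path: patch/crd-remove-2.yml
```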
> kustomize version
v5.4.2
I had the same problem and solved it as below: I used the `path` key inside the `patches` entries.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argo
resources:
If I delete the `path` key, the error below occurs.
error: invalid Kustomization: json: cannot unmarshal string into Go struct field Kustomization.patches of type types.Patch
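That error comes from listing the patch file as a bare string, which `patches` does not accept; each entry must be a mapping. A sketch (the file name is illustrative):

```yaml
# Invalid under v5: a plain string cannot unmarshal into types.Patch
# patches:
#   - my-patch.yaml

# Valid: wrap the file path in a `path:` key
patches:
  - path: my-patch.yaml
```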
No idea how to use `kubectl kustomize edit fix`; I always get this error:
error: specify one path to kustomization.yaml
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Eschewed features
What would you like to have added?
Create documentation that explains how to transition from using `patchesStrategicMerge` and `patchesJson6902` to using `patches`.
Why is this needed?
We use `patchesStrategicMerge` extensively. I have read several issues that say that `patches` is a superset of `patchesStrategicMerge` and `patchesJson6902`, but I have yet to come across a document that explains how to achieve the exact same outcome using the `patches` directive. Since the old ones have been deprecated, some documentation on how to migrate would be useful.
Can you accomplish the motivating task without this feature, and if so, how?
Yes, if someone could tell me here in this issue how to use `patches` in order to do what I previously did with `patchesStrategicMerge`.
What other solutions have you considered?
None.
Anything else we should know?
No response
Feature ownership