kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0

patches replace op doesn't work as patchesJson6902 with path: /metadata/namespace #5108

Closed pufffikk closed 1 month ago

pufffikk commented 1 year ago

What happened?

We are using patchesJson6902, but it will be deprecated in the near future, so we decided to switch to patches. We have the following configuration:

patchesJson6902:
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: new_value
    target:
      kind: KafkaTopic
      version: v1beta2
      name: .*

It works correctly and replaces all of the namespaces we need with new_value. For example, we get the following result:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  namespace: new_value

We replaced it with patches using kustomize edit fix and got:

patches:
- patch: |-
    - op: replace
      path: /metadata/namespace
      value: new_value
  target:
    kind: KafkaTopic
    name: .*
    version: v1beta2

When we build it, all namespaces stay the same (though replacing a different path, for example path: /kind, works fine).

For example, we get the following result:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  namespace: old_value

What did you expect to happen?

We expect the same behavior for patches and patchesJson6902, so in the example above we should get:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  namespace: new_value

How can we reproduce it (as minimally and precisely as possible)?

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resources.yaml

patches:
- patch: |-
    - op: replace
      path: /metadata/namespace
      value: new_value
  target:
    kind: ConfigMap
    name: .*
# resources.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-object
  namespace: test-namespace
data:
  placeholder: data

Expected output

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-object
  namespace: new_value
data:
  placeholder: data

Actual output

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-object
  namespace: test-namespace
data:
  placeholder: data

Kustomize version

5.0.1

Operating system

macOS

cailynse commented 1 year ago

Hello @pufffikk!

I'm not able to reproduce this using the file specifications provided above, on macOS with kustomize version 5.0.1. (Screenshot of the build output omitted.)

Can you provide more information about your specific setup? Did you run this with just the files from your reproduction and still get the namespace unchanged?

cailynse commented 1 year ago

/triage not-reproducible

fgc-64 commented 1 year ago

Hello, I am likely facing the same issue.

I have namespace: test_namespace in my kustomization.yaml; patchesJson6902 changes the namespace to "new_value", while patches has no effect.

What would be the best approach to preserve the namespace for some resources?

Fabio.

Alegrowin commented 1 year ago

Same here. I ran kustomize edit fix and now the namespace replacement no longer works. Running on Linux, inside a devcontainer in VS Code.

pufffikk commented 1 year ago

I have updated the kustomization.yaml file to reproduce the error: if the kustomization file contains namespace: test-namespace, the patch does not change the namespace in the output. The other files can stay the same; a sketch of the resulting output follows the kustomization below.

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-namespace
resources:
- resources.yaml

patches:
- patch: |-
    - op: replace
      path: /metadata/namespace
      value: new_value
  target:
    kind: ConfigMap
    name: .*
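
For reference, a sketch of what building this kustomization (together with the resources.yaml from the original report) produces; the values here only restate the behaviour described above, with the top-level namespace winning over the patch:

$ kustomize build .
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-object
  namespace: test-namespace  # the patch was expected to set new_value
data:
  placeholder: data
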
pufffikk commented 1 year ago

Hello @cailynse, please have a look at the last comment.

mydoomfr commented 1 year ago

Same here

$ kustomize version
v5.0.3
# kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: velero
resources:
  - release.yaml
patches:
  - target:
      group: ""
      version: v2beta1
      kind: HelmRelease
      name: velero
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: flux-system

The namespace is overridden by the Kustomization, and patches is not working as expected.

$ kustomize build .
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: velero
  namespace: velero
spec:
  chart:
    spec: ...

Using patchesJson6902 instead of patches, it works as expected:

$ kustomize build .
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: velero
  namespace: flux-system
spec:
  chart:
    spec: ...
Timoses commented 1 year ago

Same here: kustomize version v5.1.0.

When using patchesJson6902, one is able to overwrite the namespace: definition in a resulting resource. Using patches instead does not work (the namespace: definition persists).

What is actually intended? Personally, I would say that overwriting the namespace via patches is odd, since specifying namespace: ... in a kustomization.yaml expresses the intention to deploy resources to a specific namespace. Using patchesJson6902 for this is a hack at the least.

Macgregorian commented 11 months ago

I get the same results with more resources: patchesJson6902 works, but running kustomize edit fix does not convert it to a working solution.

husira commented 7 months ago

We also see exactly the same behaviour, where patchesJson6902 uses the namespace from the op: replace value (velero):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-namespace
patchesJson6902:
  # places backup schedule into velero namespace
  - target:
      kind: Schedule
      name: argocd-scheduled
      version: v1
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: velero

output:

apiVersion: velero.io/v1
kind: Schedule
metadata:
  labels:
    app: argocd
    app.kubernetes.io/instance: argocd
    argocd.argoproj.io/instance: argocd
  name: argocd-scheduled
  namespace: velero
spec:
  schedule: 5 1 * * *
  template:
    defaultVolumesToRestic: true
    hooks: {}
    includeClusterResources: true
    includedNamespaces:
    - argocd
    storageLocation: xyz
    ttl: 120h0m0s

If we change patchesJson6902 to patches, we get the namespace that is defined in the kustomization.yaml (namespace: test-namespace):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-namespace
patches:
  # places backup schedule into velero namespace
  - target:
      kind: Schedule
      name: argocd-scheduled
      version: v1
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: velero

output:

apiVersion: velero.io/v1
kind: Schedule
metadata:
  labels:
    app: argocd
    app.kubernetes.io/instance: argocd
    argocd.argoproj.io/instance: argocd
  name: argocd-scheduled
  namespace: argocd
spec:
  schedule: 5 1 * * *
  template:
    defaultVolumesToRestic: true
    hooks: {}
    includeClusterResources: true
    includedNamespaces:
    - argocd
    storageLocation: xyz
    ttl: 120h0m0s

To me it looks like the ordering with patches is different compared to patchesJson6902: once all patches are applied, the resource is patched again with the namespace definition from the main kustomization.yaml. If we remove namespace: test-namespace, we get the correct namespace: velero.
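
A minimal sketch of that workaround, assuming the rest of husira's kustomization stays unchanged: with the top-level namespace field removed, the patched value is no longer overridden and the Schedule ends up in velero:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# namespace: test-namespace   removed so the patch below is not overridden
patches:
  # places the backup schedule into the velero namespace
  - target:
      kind: Schedule
      name: argocd-scheduled
      version: v1
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: velero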

Is this behavior intended, @cailynse?

saydulaev commented 7 months ago

Hello @cailynse, the same issue here.

$ kustomize version
v5.2.1

To reproduce

$ tree kustomize
kustomize
├── base
│   └── cadvisor
│       └── kustomization.yaml
└── overlays
    └── prod
        └── cadvisor
            └── kustomization.yaml

Where base/cadvisor/kustomization.yaml is:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/google/cadvisor/deploy/kubernetes/base?ref=v0.48.1

And overlays/prod/cadvisor/kustomization.yaml is:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: metrics-system

resources:
- ../../../base/cadvisor

patches:
- patch: |- 
    $patch: delete
    apiVersion: v1
    kind: Namespace
    metadata:
      name: cadvisor
      labels:
        app: cadvisor
- target:
    group: apps
    version: v1
    kind: DaemonSet
    name: cadvisor 
    labelSelector: app=cadvisor
  patch: |-
    - op: replace
      path: /metadata/name
      value: "cadvisor-2"
    - op: replace
      path: /spec/template/spec/containers/0/name
      value: "cadvisor-2"
    - op: replace
      path: /spec/template/spec/serviceAccountName
      value: "cadvisor-2"

As a result, metadata.name, spec.template.spec.containers[0].name, and spec.template.spec.serviceAccountName are not changed.

If I change my overlay overlays/prod/cadvisor/kustomization.yaml to the form below, it works as expected.

patchesJson6902:
- target:
    group: apps
    version: v1
    kind: DaemonSet
    name: cadvisor
    labelSelector: app=cadvisor
  patch: |-
    - op: replace
      path: /metadata/name
      value: "cadvisor-2"
    - op: replace
      path: /spec/template/spec/containers/0/name
      value: "cadvisor-2"
    - op: replace
      path: /spec/template/spec/serviceAccountName
      value: "cadvisor-2"

$ kustomize build overlays/prod/cadvisor
# Warning: 'patchesJson6902' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor-2
  namespace: metrics-system
spec:
   ...
   containers:
      - image: gcr.io/cadvisor/cadvisor:v0.45.0
        name: cadvisor-2
        ...
   serviceAccountName: cadvisor-2
husira commented 7 months ago

> /triage not-reproducible

It is reproducible. Any updates on this, @cailynse?

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

nicovak commented 2 months ago

Same issue on my side, any update?

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 month ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/5108#issuecomment-2250319259):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

tosan88 commented 2 weeks ago

Not sure if anyone else is or will be in the same situation as us, but we hit this issue without realising that we had to update the patch target name, because we use namePrefix in the kustomization file. With patchesJson6902, the replacement worked with the target name containing the namePrefix; with patches it no longer did.

So this is how our kustomization.yaml file changed. Before:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
namePrefix: example-
patchesJson6902:
  - target:
      kind: PersistentVolumeClaim
      name: example-files-claim
      version: v1
    path: replace-and-add-pvc-data.yaml

After:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
namePrefix: example-
patches:
- path: replace-and-add-pvc-data.yaml
  target:
    kind: PersistentVolumeClaim
    name: files-claim
    version: v1
fgc-64 commented 2 weeks ago

As pointed out by husira, to change the namespace only for specific targets (and without using patchesJson6902), we need to remove the namespace field from the kustomization and apply individual patches for the namespace(s) we need:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
namespace: default # <- remove this row
...
patches:

The first patch applies the default namespace everywhere, the second one only to the custom resources I need (a sketch follows below). It works for me at least. ;-)
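
A hypothetical sketch of what those two patches could look like; the resource kind and namespace values are placeholders rather than taken from this thread, and op: replace assumes every matched resource already has metadata.namespace set:

patches:
  # first patch: apply the default namespace to every resource
  - target:
      name: .*
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: default
  # second patch: applied after the first, so it wins for the targeted kind
  - target:
      kind: KafkaTopic
      name: .*
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: custom-namespace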