kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0

Kustomize doesn't apply `nameSuffix` to `replacements.source.name` #5442

Closed · karlvr closed this 1 month ago

karlvr commented 12 months ago

What happened?

`nameSuffix` does not appear to apply to `replacements.source.name`, so the source of my replacement was not the resource I expected, and the output contained the name without the suffix applied.

What did you expect to happen?

I expect `nameSuffix` to apply to `replacements.source.name`, as I want the source to match the resource that was created by this kustomization.

How can we reproduce it (as minimally and precisely as possible)?

kustomization.yml:

---
resources:
  - res1
nameSuffix: -test

res1/kustomization.yml:

---
resources:
  - service.yml
  - deployment.yml
replacements:
  - source:
      name: service     # matches metadata.name in service.yml below
      kind: Service
      version: v1
    targets:
      - fieldPaths:
          - spec.template.spec.containers.0.env.0.value
        select:
          group: apps
          kind: Deployment
          name: deployment     # matches metadata.name in deployment.yml below
          version: v1

res1/service.yml:

apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  type: ClusterIP
  clusterIP: None

res1/deployment.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      service: abc
  template:
    metadata:
      labels:
        service: abc
    spec:
      containers:
      - name: test
        env:
        - name: REDIS_DOCKER_SERVICE_NAME
          value: REDIS_NAME_PLACEHOLDER
        image: example

Expected output

apiVersion: v1
kind: Service
metadata:
  name: service-test
spec:
  clusterIP: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
spec:
  replicas: 1
  selector:
    matchLabels:
      service: abc
  template:
    metadata:
      labels:
        service: abc
    spec:
      containers:
      - env:
        - name: REDIS_DOCKER_SERVICE_NAME
          value: service-test
        image: example
        name: test

Actual output

apiVersion: v1
kind: Service
metadata:
  name: service-test
spec:
  clusterIP: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
spec:
  replicas: 1
  selector:
    matchLabels:
      service: abc
  template:
    metadata:
      labels:
        service: abc
    spec:
      containers:
      - env:
        - name: REDIS_DOCKER_SERVICE_NAME
          value: service
        image: example
        name: test

Kustomize version

v5.2.1

Operating system

macOS

Mlundm commented 12 months ago

This is similar to another issue. Please see my answer there

https://github.com/kubernetes-sigs/kustomize/issues/5429#issuecomment-1810249254

natasha41575 commented 11 months ago

The nameSuffix is not getting applied because the replacement runs in the base kustomization, while the nameSuffix is applied in the overlay. That means the replacement happens first, and the nameSuffix transformer runs afterward, which is why the suffix doesn't get propagated into the replaced value. To fix this, you will have to move your replacements into the overlay. If you have multiple overlays, you may be able to leverage components to avoid duplicating your replacements definitions.
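For illustration, a minimal sketch of that fix applied to the repro above (this rearrangement is an editorial assumption, not code from the thread): the replacements block moves from res1/kustomization.yml into the top-level kustomization.yml. Within a single kustomization, the suffix transformer runs before replacements, so the source and select names now reference the already-suffixed resources.

kustomization.yml (sketch):

---
resources:
  - res1
nameSuffix: -test
replacements:
  - source:
      name: service-test      # nameSuffix has already been applied at this point
      kind: Service
      version: v1
    targets:
      - fieldPaths:
          - spec.template.spec.containers.0.env.0.value
        select:
          group: apps
          kind: Deployment
          name: deployment-test   # suffixed name as well
          version: v1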

Another option is to add the field path being replaced (in the repro above, spec.template.spec.containers.env.value) to your name reference transformer configurations if you want name transformations to apply to all such fields.
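A sketch of that second option, using kustomize's configurations field with a custom nameReference entry (the file name namereference.yaml and the exact field path are illustrative assumptions based on the repro above). The replacements block can then stay in the base: the base replacement writes the Service's name into the env value, and when the overlay applies nameSuffix, the name reference transformer updates that field along with the Service itself.

kustomization.yml (sketch; the config needs to be in effect where nameSuffix runs):

---
resources:
  - res1
nameSuffix: -test
configurations:
  - namereference.yaml

namereference.yaml (sketch):

# Teach the name reference transformer that this Deployment field holds a
# Service name, so namePrefix/nameSuffix transformations update it as well.
nameReference:
  - kind: Service
    version: v1
    fieldSpecs:
      - path: spec/template/spec/containers/env/value
        kind: Deployment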

We have read your comment in https://github.com/kubernetes-sigs/kustomize/issues/5429#issuecomment-1810249254, specifically:

> Replacements only takes into account the resources in the same kustomization and not above.

This is intentional, as vars broke the kustomize design philosophy of each base/overlay layer being an independent step in the pipeline.

If none of these explanations help you, could you please elaborate on your use case and your confusion so that we have more information to better help you?

/kind support
/triage needs-information

Mlundm commented 11 months ago

Yeah, that was my intention: to inform them about how it works differently from vars.

Thanks for clarifying!

wallrj commented 11 months ago

I agree with @karlvr, but in my case I was surprised that the namespace transformer didn't apply to `replacements.source.namespace`.

My use case is that I want to add a Service and a cert-manager Certificate resource in my base/ directory and have the Certificate.spec.dnsNames derived from the Service.metadata.name and Service.metadata.namespace.

@natasha41575 How can I accomplish this?

$ tree
.
├── base
│   ├── certificate.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── production
    │   └── kustomization.yaml
    └── staging
        └── kustomization.yaml

4 directories, 5 files
# base/certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: www
spec:
  dnsNames:
    - SERVICE_NAME.SERVICE_NAMESPACE.svc
    - SERVICE_NAME.SERVICE_NAMESPACE.svc.cluster.local
  secretName: www-tls
  issuerRef:
    name: issuer-1
    kind: ClusterIssuer
# base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: www
  namespace: default
spec: {}
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- service.yaml
- certificate.yaml

replacements:
- source:
    name: www
    kind: Service
    version: v1
    fieldPath: metadata.name
  targets:
  - select:
      name: www
      kind: Certificate
      group: cert-manager.io
    fieldPaths:
    - spec.dnsNames.*
    options:
      delimiter: .
      index: 0
- source:
    name: www
    kind: Service
    version: v1
    fieldPath: metadata.namespace
  targets:
  - select:
      name: www
      kind: Certificate
      group: cert-manager.io
    fieldPaths:
    - spec.dnsNames.*
    options:
      delimiter: .
      index: 1
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base

namespace: staging
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base

namespace: production

Output

# $ kustomize build overlays/production/
apiVersion: v1
kind: Service
metadata:
  name: www
  namespace: production
spec: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: www
  namespace: production
spec:
  dnsNames:
  - www.default.svc # ❗ Wrong namespace
  - www.default.svc.cluster.local # ❗ Wrong namespace
  issuerRef:
    kind: ClusterIssuer
    name: issuer-1
  secretName: www-tls
karlvr commented 11 months ago

@natasha41575 thank you for your explanation, that's very clear, and on that basis I think this issue is resolved, except that it would be really handy if replacements could work in the way described! It felt to me like the overlay nameSuffix etc. applied to everything else as it worked its way through the kustomizations, so replacements felt like they were misbehaving. If we're barking up the wrong tree, I'll close this issue.

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

jamesmw-anz commented 7 months ago

I have exactly the same use case as @wallrj: using the namespaces for certs. I also use namePrefix from an overlay and would like it to propagate, along with the namespace. I'm trying to migrate some older kustomize configuration where this is implemented with vars, and I cannot migrate with this regression (even if it is intended).

Should the namespace and namePrefix/nameSuffix transformers not also work through replacements? It's not reasonable to suggest that the replacements should be moved up to the overlay level.

/remove-lifecycle stale
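For context, a sketch of the kind of legacy vars-based setup being migrated here, modeled on @wallrj's example (the names and files are assumptions; vars are deprecated). Because vars were resolved once at the very end of the whole build, they picked up the overlay's namespace transformer, which is the behavior replacements do not reproduce:

base/kustomization.yaml (sketch):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- service.yaml
- certificate.yaml

vars:
- name: SERVICE_NAME
  objref:
    kind: Service
    name: www
    apiVersion: v1
  fieldref:
    fieldpath: metadata.name
- name: SERVICE_NAMESPACE
  objref:
    kind: Service
    name: www
    apiVersion: v1
  fieldref:
    fieldpath: metadata.namespace

The Certificate would then use spec.dnsNames entries like $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc, substituted after all overlay transformers have run (assuming spec/dnsNames is added to the varReference configuration so that vars are expanded in that field).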

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

ArshiAAkhavan commented 6 months ago

@wallrj @jamesmw-anz, I am facing the exact same issue regarding certificates and name transformers.

What is the final design and/or solution you came up with to mitigate this issue?

/remove-lifecycle stale

Mlundm commented 6 months ago

@ArshiAAkhavan

If you still want to keep the replacement in the base, then I have no solution. But if your goal is to reduce duplication of the replacements across your overlays, then wrapping them in a component is one way.

There is one problem with components and replacements for this case, and that is the order of evaluation with components:

resources -> components -> transformers (namespace, replacements, etc.)

This means that a replacement in a component will not take the namespace transformer in the overlay into account.

Example that will not work

resources:
- ../../base

namespace: production

components:
- ../../components/cert-replacement

And so you have to also wrap the namespace transformer into a component and apply it before the replacement that you want.

Example that works

resources:
- ../../base

components:
- ../../components/namespaces/production
- ../../components/cert-replacement

It's not very pretty, but it works.
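For concreteness, a sketch of what those two components might contain, modeled on @wallrj's base (the file paths match the overlay snippets above; the replacement shown is the namespace half of @wallrj's replacements, and `apiVersion: kustomize.config.k8s.io/v1alpha1` with `kind: Component` is the standard component convention):

components/namespaces/production/kustomization.yaml (sketch):

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

namespace: production

components/cert-replacement/kustomization.yaml (sketch):

apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

replacements:
- source:
    name: www
    kind: Service
    version: v1
    fieldPath: metadata.namespace
  targets:
  - select:
      name: www
      kind: Certificate
      group: cert-manager.io
    fieldPaths:
    - spec.dnsNames.*
    options:
      delimiter: .
      index: 1

Because components are applied in the order they are listed, the namespace component rewrites the Service's metadata.namespace first, and the replacement component then copies the updated value into the Certificate's dnsNames.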

ArshiAAkhavan commented 6 months ago

@Mlundm thanks, it worked!

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 month ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/5442#issuecomment-2405537625):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.