kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0

Kustomize doesn't pass namespace to helmCharts, resulting in inconsistent output #5566

Open Skaronator opened 7 months ago

Skaronator commented 7 months ago

What happened?

When using Kustomize with Helm charts, the namespace defined at the top level of the kustomization.yaml file is not propagated to the helmCharts releases. As a result, the generated YAML contains inconsistent namespace values.

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmGlobals:
  chartHome: ./charts

namespace: the-actual-namespace   # <- defines namespace for all resources

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml

Helm Chart Template

apiVersion: v1
kind: Service
metadata:
  name: the-bug
  namespace: {{ .Release.Namespace }}
  annotations:
    this-service-is-deployed-in-namespace: {{ .Release.Namespace }}

Result

$ kustomize build . --enable-helm
apiVersion: v1
kind: Service
metadata:
  annotations:
    this-service-is-deployed-in-namespace: default # <- {{ .Release.Namespace }} still points to "default"
  name: the-bug
  namespace: the-actual-namespace                  # <- Namespace overwritten by Kustomization to the correct value

This inconsistency can cause issues when deploying resources such as the Bitnami MinIO chart, which relies on {{ .Release.Namespace }} to dynamically build its service URL:

https://github.com/bitnami/charts/blob/e8999fedbd8e35c96099aa44d3ca4ea947bfbe22/bitnami/minio/templates/distributed/statefulset.yaml#L150

What did you expect to happen?

I expected that the namespace defined in the top level of the kustomization.yaml file would be passed down to all helmCharts releases.

When a Helm chart entry already specifies a namespace, that namespace should take precedence.
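
The precedence proposed above can be sketched as a small fallback rule. This is illustrative only; `effective_namespace` is a hypothetical helper, not part of Kustomize's code:

```python
def effective_namespace(chart: dict, kustomization: dict) -> str:
    """Namespace precedence proposed in this issue (hypothetical helper).

    A chart-level helmCharts.namespace wins; otherwise fall back to the
    Kustomization's top-level namespace; Helm's own default is "default".
    """
    return chart.get("namespace") or kustomization.get("namespace") or "default"


# The chart inherits the top-level namespace when it sets none of its own:
print(effective_namespace({"name": "service"}, {"namespace": "the-actual-namespace"}))
# -> the-actual-namespace
```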

How can we reproduce it (as minimally and precisely as possible)?

Repository to reproduce it is available here: https://github.com/Skaronator/kustomize-namespace-bug

$ tree
.
├── charts
│   └── service
│       ├── Chart.yaml
│       ├── templates
│       │   └── service.yaml
│       └── values.yaml
├── kustomization.yaml
├── README.md
└── values.yaml

4 directories, 6 files
# ./kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmGlobals:
  chartHome: ./charts

namespace: the-actual-namespace   # <- defines namespace for all resources

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml
# ./charts/service/Chart.yaml
apiVersion: v2
name: service
description: A Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: "1.0.0"
# ./charts/service/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: the-bug
  namespace: {{ .Release.Namespace }}
  annotations:
    this-service-is-deployed-in-namespace: {{ .Release.Namespace }}

Expected output

$ kustomize build . --enable-helm
apiVersion: v1
kind: Service
metadata:
  annotations:
    this-service-is-deployed-in-namespace: the-actual-namespace # <- {{ .Release.Namespace }} should be correctly set
  name: the-bug
  namespace: the-actual-namespace                               # <- Namespace overwritten by Kustomization to the correct value

Actual output

$ kustomize build . --enable-helm
apiVersion: v1
kind: Service
metadata:
  annotations:
    this-service-is-deployed-in-namespace: default # <- {{ .Release.Namespace }} still points to "default"
  name: the-bug
  namespace: the-actual-namespace                  # <- Namespace overwritten by Kustomization to the correct value

Kustomize version

v5.3.0

Operating system

Linux

koba1t commented 7 months ago

@Skaronator Thanks for reporting an issue.

The namespace transformer only changes the namespace field in Kubernetes resources, and helmCharts generates its Kubernetes YAML before that transformer runs. This means your Helm template has already been executed, and only the generated YAML exists by the time the namespace transformer is called. {{ .Release.Namespace }} only has meaning inside Helm, so Kustomize can't find where that namespace value ended up.

So please consider using the helmCharts.namespace field.

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml
  namespace: the-actual-namespace 

/triage out-of-scope

Skaronator commented 7 months ago

Hi @koba1t, thanks for taking the time.

The current implementation produces inconsistent YAML. Defining the namespace within helmCharts is often not possible, for example when ArgoCD dynamically sets the namespace for a Kustomization.

I'd expect that if helmCharts.namespace is not defined, Kustomize would fall back to the top-level namespace.

allandegnan commented 6 months ago

This really seems closer to a bug, @koba1t.

Setting the namespace at the Kustomize level inflates the Helm chart and rewrites all the objects into that namespace, but misses things like connection strings, so the chart is deployed in a broken state.

Example, the airflow helm chart here:

https://github.com/apache/airflow/blob/main/chart/templates/secrets/metadata-connection-secret.yaml#L24

You can see it relies on {{ .Release.Namespace }} as part of the default connection string to its database. If the namespaces in Kustomize and the Helm chart don't match, you get something that looks functional but isn't.
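
An illustrative fragment (not the actual chart source) of the pattern the Airflow chart uses. The hostname below bakes {{ .Release.Namespace }} into a string that Kustomize's namespace transformer can no longer see after Helm has rendered it:

```yaml
# Hypothetical Helm template fragment, shown only to illustrate the failure mode.
apiVersion: v1
kind: Secret
metadata:
  name: metadata-connection
stringData:
  # Helm renders .Release.Namespace ("default" under kustomize build) into the
  # hostname; the later namespace transformer only rewrites metadata.namespace.
  connection: postgresql://airflow@airflow-postgresql.{{ .Release.Namespace }}.svc.cluster.local:5432/airflow
```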

panteparak commented 6 months ago

I too agree that this looks like a bug on the Kustomize side. I believe the correct strategy would be the following.

If a namespace is defined in kustomization.yaml like below,

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: the-actual-namespace

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml

Kustomize should not transform the namespace itself, but instead let Helm do the transformation.

This would ensure that built-in variables such as {{ .Release.Namespace }} are rendered correctly. It might also be worth supporting a namespace at the individual chart level, to allow overriding the namespace for a specific chart, like so:

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml
  namespace: the-actual-namespace-2

This looks to be partially related to #4593

aviadhaham commented 5 months ago

Hi, I found a workaround that at least avoids repeating the namespace in the patch resources.

Instead of adding the namespace to the patch, I only specified the name of the Deployment and the fields I want Kustomize to patch, and it worked.

Ref kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: amp-airflow-prod
helmCharts:
  - name: airflow
    repo: https://charts.bitnami.com/bitnami
    releaseName: amp-airflow
    namespace: amp-airflow-prod
    version: 17.2.4
    valuesFile: values.yml
resources:
  - secrets.yml

patches:
  - path: amp-airflow-exporter-patch.yml
    target:
      kind: Deployment
      name: amp-airflow-exporter
      # namespace: amp-airflow-prod

The patch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: amp-airflow-exporter
  # namespace: amp-airflow-prod
spec:
  template:
    spec:
      containers:
        - name: airflow-exporter
          env:
            - name: TEST
              value: "test"

The patch was applied to the right resource, and the env var is there :)

ThomasSteinbach commented 5 months ago

I have the exact same behavior with this namespace... https://github.com/ansible/awx-operator/blob/devel/.helm/starter/templates/awx-deploy.yaml#L7

... when deploying it as a Kustomization with ArgoCD.

QuadmanSWE commented 5 months ago

I get this problem when deploying the VerneMQ Helm chart with Kustomize through ArgoCD. All of its resources except a ServiceAccount, a Role, and a RoleBinding end up in the right place.

https://vernemq.github.io/docker-vernemq

tsloughter commented 4 months ago

The namespace transformer only changes the namespace field in Kubernetes resources, and helmCharts generates its Kubernetes YAML before that transformer runs. This means your Helm template has already been executed, and only the generated YAML exists by the time the namespace transformer is called. {{ .Release.Namespace }} only has meaning inside Helm, so Kustomize can't find where that namespace value ended up.

The desire isn't for Kustomize to do anything with .Release.Namespace, but rather to default helmCharts.namespace to the given namespace. Doing what you suggest:

helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml
  namespace: the-actual-namespace 

means any overlay like:

kind: Kustomization
namespace: other-namespace
...

will still result in the-actual-namespace being used in the Helm templates.
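
A concrete sketch of that overlay failure (paths and names are illustrative): the base hard-codes helmCharts.namespace, so the overlay's namespace only rewrites metadata.namespace, while strings already rendered from {{ .Release.Namespace }} keep the base value:

```yaml
# base/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: service
  releaseName: service
  valuesFile: values.yaml
  namespace: the-actual-namespace   # hard-coded; overlays can't change what Helm rendered
---
# overlay/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
namespace: other-namespace          # rewrites metadata.namespace only
```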

Skaronator commented 3 months ago

@koba1t

Given the numerous comments and the 13 thumbs up on my original issue, could you please reconsider if it's still out-of-scope and update the labels accordingly?

Thanks

tsloughter commented 3 months ago

On Slack, someone explained why Kustomize can't pass the namespace from the Kustomization when it isn't defined in the Helm chart entry, but I didn't completely understand it; something about the order in which things are done.

I also know @koba1t is working on new helm support https://github.com/kubernetes-sigs/kustomize/issues/4401#issuecomment-1981991954 and curious if it'll be done there.

koba1t commented 3 months ago

Hmm.. I'm worried this feature may require a huge change, because it would mean one transformer's output affects another transformer... But I understand that many people want it.

/kind feature
/area helm
/triage under-consideration
/priority important-longterm

k8s-triage-robot commented 2 weeks ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale