kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0

HelmChartInflationGenerator ignores releaseNamespace #4593

Open in-cloud-opensource opened 2 years ago

in-cloud-opensource commented 2 years ago

Describe the bug

According to the documentation of the Helm chart inflation generator, it should be possible to set the release namespace of the Helm template. When running kustomize build --enable-helm . with the kustomization.yaml below, the releaseNamespace is ignored.

Files that can reproduce the issue

Example:

kustomization.yaml

helmChartInflationGenerator:
- chartName: nats
  chartVersion: v0.13.1
  chartRepoUrl: https://nats-io.github.io/k8s/helm/charts
  releaseName: nats
  releaseNamespace: XXXXXX

Expected output

Expected output, e.g. for the ConfigMap: namespace set to XXXXXX

apiVersion: v1
data:
  nats.conf: |
    # NATS Clients Port
    port: 4222

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    ###############
    #             #
    # Monitoring  #
    #             #
    ###############
    http: 8222
    server_name:$POD_NAME
    lame_duck_duration: 120s
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats-config
  namespace: XXXXXX

Actual output

apiVersion: v1
data:
  nats.conf: |
    # NATS Clients Port
    port: 4222

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    ###############
    #             #
    # Monitoring  #
    #             #
    ###############
    http: 8222
    server_name:$POD_NAME
    lame_duck_duration: 120s
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats-config
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: client
    port: 4222
  - name: cluster
    port: 6222
  - name: monitor
    port: 8222
  - name: metrics
    port: 7777
  - name: leafnodes
    port: 7422
  - name: gateways
    port: 7522
  selector:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/name: nats
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nats-box
    chart: nats-0.13.1
  name: nats-box
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nats-box
  template:
    metadata:
      labels:
        app: nats-box
    spec:
      containers:
      - command:
        - tail
        - -f
        - /dev/null
        env:
        - name: NATS_URL
          value: nats
        image: natsio/nats-box:0.8.1
        imagePullPolicy: IfNotPresent
        name: nats-box
        resources: null
        volumeMounts: null
      volumes: null
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats
  namespace: default
spec:
  podManagementPolicy: Parallel
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: nats
      app.kubernetes.io/name: nats
  serviceName: nats
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "7777"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/instance: nats
        app.kubernetes.io/name: nats
    spec:
      containers:
      - command:
        - nats-server
        - --config
        - /etc/nats-config/nats.conf
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: SERVER_NAME
          value: $(POD_NAME)
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_ADVERTISE
          value: $(POD_NAME).nats.$(POD_NAMESPACE).svc.cluster.local
        image: nats:2.7.2-alpine
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nats-server -sl=ldm=/var/run/nats/nats.pid
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: nats
        ports:
        - containerPort: 4222
          name: client
        - containerPort: 7422
          name: leafnodes
        - containerPort: 7522
          name: gateways
        - containerPort: 6222
          name: cluster
        - containerPort: 8222
          name: monitor
        - containerPort: 7777
          name: metrics
        resources: {}
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /etc/nats-config
          name: config-volume
        - mountPath: /var/run/nats
          name: pid
      - command:
        - nats-server-config-reloader
        - -pid
        - /var/run/nats/nats.pid
        - -config
        - /etc/nats-config/nats.conf
        image: natsio/nats-server-config-reloader:0.6.2
        imagePullPolicy: IfNotPresent
        name: reloader
        resources: null
        volumeMounts:
        - mountPath: /etc/nats-config
          name: config-volume
        - mountPath: /var/run/nats
          name: pid
      - args:
        - -connz
        - -routez
        - -subz
        - -varz
        - -prefix=nats
        - -use_internal_server_id
        - http://localhost:8222/
        image: natsio/prometheus-nats-exporter:0.9.1
        imagePullPolicy: IfNotPresent
        name: metrics
        ports:
        - containerPort: 7777
          name: metrics
        resources: {}
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 120
      volumes:
      - configMap:
          name: nats-config
        name: config-volume
      - emptyDir: {}
        name: pid
  volumeClaimTemplates: null
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    helm.sh/hook: test
  labels:
    app.kubernetes.io/instance: nats
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nats
    app.kubernetes.io/version: 2.7.2
    helm.sh/chart: nats-0.13.1
  name: nats-test-request-reply
spec:
  containers:
  - command:
    - /bin/sh
    - -ec
    - |
      nats reply -s nats://$NATS_HOST:4222 'name.>' --command "echo 1" &
    - |
      "&&"
    - |
      name=$(nats request -s nats://$NATS_HOST:4222 name.test '' 2>/dev/null)
    - |
      "&&"
    - |
      [ $name = test ]
    env:
    - name: NATS_HOST
      value: nats
    image: synadia/nats-box
    name: nats-box
  restartPolicy: Never

Kustomize version {Version:kustomize/v4.5.3 GitCommit:de6b9784912a5c1df309e6ae9152b962be4eba47 BuildDate:2022-03-24T20:51:20Z GoOs:linux GoArch:amd64}

Platform Linux

Additional context

It seems that the ReleaseNamespace argument is missing from the mapping between HelmChartArgs and HelmChart; see:

func makeHelmChartFromHca(old *HelmChartArgs) (c HelmChart) {
    c.Name = old.ChartName
    c.Version = old.ChartVersion
    c.Repo = old.ChartRepoURL
    c.ValuesFile = old.Values
    c.ValuesInline = old.ValuesLocal
    c.ValuesMerge = old.ValuesMerge
    c.ReleaseName = old.ReleaseName
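    // Note: c.Namespace is never set from old.ReleaseNamespace here,
    // which is why the release namespace is silently dropped.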
    return
}
kevin-hanselman commented 2 years ago

This just bit me too. I know "+1" comments are frowned upon, but I think this is a serious enough bug to give this page a bump.

brahama commented 2 years ago

Same thing happening here.

{Version:kustomize/v4.5.5 GitCommit:daa3e5e2c2d3a4b8c94021a7384bfb06734bcd26 BuildDate:2022-05-20T20:21:22Z GoOs:darwin GoArch:amd64}

Helm chart redis from Bitnami.

How can this be bypassed with kustomize? Basically, the replicas fail as they try to connect to the master in the default namespace instead.

- name: REDIS_MASTER_HOST
  value: {{ template "common.names.fullname" . }}-master-0.{{ template "common.names.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}

Cheers

jberger commented 2 years ago

This bug causes kustomize to create an invalid certificate for a webhook in the banzaicloud kafka operator. It broke a LOT of stuff. A LOT. And because of its location inside a cert in a secret, it was not easy to trace back to this issue, so please help the next user out and fix this soon. https://github.com/banzaicloud/koperator/blob/v0.21.2/charts/kafka-operator/templates/operator-deployment-with-webhook.yaml#L7

jberger commented 2 years ago

And now I'm seeing that this isn't a recent issue; per https://github.com/kubernetes-sigs/kustomize/issues/3815 it has been broken for over a year. If this cannot be made to work, at least remove the documentation for the releaseNamespace and extraArgs fields here: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_helmchartinflationgenerator_

jberger commented 2 years ago

To the future reader, the only mechanism that works is the generator-file usage, which is woefully under-documented anyway. Use the file as constituted in this test: https://github.com/kubernetes-sigs/kustomize/blob/1c5393216672c87d5bd553ba255d8dd3044bbf0c/plugin/builtin/helmchartinflationgenerator/HelmChartInflationGenerator_test.go#L24-L31

Create that as a separate file (say helmchart.yaml) in the same path as your kustomization.yaml, then in your kustomization.yaml add:

generators:
- helmchart.yaml
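
For reference, a minimal helmchart.yaml for the nats chart from the original report might look like the following. This is a sketch based on the linked test; the builtin HelmChartInflationGenerator accepts the new HelmChart field names, and XXXXXX stands in for the target namespace:

apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: nats
name: nats
version: v0.13.1
repo: https://nats-io.github.io/k8s/helm/charts
releaseName: nats
namespace: XXXXXX
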
brahama commented 2 years ago

Yes. Documentation on K8s is especially hard to follow. I removed this generator in favor of helmCharts:, which I found somewhere in an issue like this one, and also did not find much documentation for it.
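
For the nats chart from the original report, an equivalent helmCharts entry would look something like this (a sketch; in the new form, the namespace field is what replaces releaseNamespace):

helmCharts:
- name: nats
  version: v0.13.1
  repo: https://nats-io.github.io/k8s/helm/charts
  releaseName: nats
  namespace: XXXXXX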

jberger commented 2 years ago

I also suspect (but cannot prove, because I don't have a local testing mechanism) that they could fix at least the documented releaseNamespace issue by simply adding one line here https://github.com/kubernetes-sigs/kustomize/blob/1c5393216672c87d5bd553ba255d8dd3044bbf0c/api/types/helmchartargs.go#L120

c.Namespace = old.ReleaseNamespace

extraArgs could not be fixed as easily because it isn't supported in the new form, but the namespace is; it is just silently ignored currently.
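
With that line added, the mapping function quoted earlier would read as follows (an untested sketch of the suggested fix):

func makeHelmChartFromHca(old *HelmChartArgs) (c HelmChart) {
    c.Name = old.ChartName
    c.Version = old.ChartVersion
    c.Repo = old.ChartRepoURL
    c.ValuesFile = old.Values
    c.ValuesInline = old.ValuesLocal
    c.ValuesMerge = old.ValuesMerge
    c.ReleaseName = old.ReleaseName
    // suggested fix: carry the release namespace over to the new form
    c.Namespace = old.ReleaseNamespace
    return
}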

KnVerey commented 2 years ago

The fix suggested sounds plausible and we would appreciate a PR with that + test proving it works.

/triage accepted
/good-first-issue

k8s-ci-robot commented 2 years ago

@KnVerey: This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-good-first-issue command.

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/4593):

> The fix suggested sounds plausible and we would appreciate a PR with that + test proving it works.
>
> /triage accepted
> /good-first-issue

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
pxwxnvermx commented 2 years ago

/assign

sonali-rajput commented 2 years ago

Is this still getting worked on?

winston0410 commented 2 years ago

I think the issue is still there

toanju commented 2 years ago

It looks like this is still being worked on here

However, @brahama pointed out that helmCharts is the better way to go. The docs are not up to date on this, but there is some chart example. In addition, you might want to look at https://github.com/kubernetes-sigs/kustomize/blob/cf3a452ddd6f83945d39d582243b8592ec627ae3/api/types/helmchartargs.go#L33 for more details on the parameters.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

schlichtanders commented 1 year ago

> To the future reader, the only mechanism that works is the generator-file usage, which is woefully under-documented anyway. Use the file as constituted in this test:
>
> https://github.com/kubernetes-sigs/kustomize/blob/1c5393216672c87d5bd553ba255d8dd3044bbf0c/plugin/builtin/helmchartinflationgenerator/HelmChartInflationGenerator_test.go#L24-L31
>
> Create that as a separate file (say helmchart.yaml) in the same path as your kustomization.yaml, then in your kustomization.yaml add:
>
> generators:
> - helmchart.yaml

This does not seem to work for me - can someone confirm that it works for them?

ashutosh887 commented 1 year ago

I would like to work on this

/assign

trexx commented 1 year ago

The above methods wouldn't work for me. For example, the fluxcd-community helm chart's templates are missing the namespace field, and I couldn't get it populated by either helmCharts or HelmChartInflationGenerator. However, for some other helm charts whose templates do have a templated namespace field (such as Cilium, which uses {{ .Release.Namespace }}), I was able to alter the destination namespace.

./kustomization.yaml

resources:
  - ./flux

./flux/kustomization.yaml

namespace: flux

resources:
  - ./namespace.yaml

helmCharts:
  - name: flux2
    namespace: flux
    valuesFile: ./values.yaml
    releaseName: flux
    version: 2.9.2
    repo: https://fluxcd-community.github.io/helm-charts
    includeCRDs: true
vaibhav2107 commented 1 year ago

/remove-lifecycle stale

k8s-triage-robot commented 1 month ago

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

jberger commented 1 month ago

Given the number of comments on the more recent #5566, I believe this is still very much an issue.