kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0

Kustomize Helm plugin does not work with nested charts #5742

Open narcislinux opened 3 months ago

narcislinux commented 3 months ago

What happened?

Hi,

I am trying to deploy the New Relic nri-bundle chart using Kustomize with Helm charts instead of Kustomize with plain manifests. When I render the chart through Kustomize's Helm integration, the kustomize build output is incomplete: several of the subcharts enabled in values.yaml are not rendered at all.

What did you expect to happen?

Can you guide me on whether I am making a mistake somewhere or if there is an issue with Kustomize?

How can we reproduce it (as minimally and precisely as possible)?

This is my kustomization.yaml configuration:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: nri-bundle 
    repo: https://helm-charts.newrelic.com
    releaseName: nri-bundle
    namespace: newrelic
    version: latest
    valuesFile: values.yaml

helmGlobals:
  chartHome: charts
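
For reference, since the chart is already vendored under chartHome, the helmCharts generator essentially renders it with helm template. A rough manual equivalent of the entry above (a sketch only; the exact flags kustomize adds, for example for CRD inclusion, may differ):

# Approximate manual equivalent of the helmCharts entry above (sketch only;
# kustomize may pass additional flags).
helm template nri-bundle ./charts/nri-bundle \
  --namespace newrelic \
  -f values.yaml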

And the values.yaml file, used for both helm template and kustomize build, is:

newrelic-infrastructure:
  enabled: true

nri-prometheus:
  enabled: false

nri-metadata-injection:
  enabled: true

kube-state-metrics:
  enabled: true
  image:
    tag: v2.10.11
  serviceAccount:
    create: true
    name: ""
    imagePullSecrets:
      - name: image-pull-secrets

nri-kube-events:
  enabled: true

newrelic-logging:
  enabled: true

newrelic-pixie:
  enabled: false

pixie-chart:
  enabled: false

newrelic-infra-operator:
  enabled: false

newrelic-prometheus-agent:
  enabled: true

newrelic-k8s-metrics-adapter:
  enabled: false

global:
  cluster: test
  licenseKey: "*****************"
  insightsKey: ""
  customSecretName: "secret"
  customSecretLicenseKey: "newrelic_license"
  labels: {}
  podLabels: {}
  images:
    registry: ""
    pullSecrets: []
  serviceAccount:
    annotations: {}
    create:
    name:
  hostNetwork:
  dnsConfig: {}
  priorityClassName: ""
  podSecurityContext: {}
  containerSecurityContext: {}
  affinity: {}
  nodeSelector: {}
  tolerations: []
  customAttributes: {}
  lowDataMode: true
  privileged: true
  fargate:
  proxy:  
  nrStaging:
  fedramp:
    enabled: 
  verboseLog:
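
The per-component enabled flags above map to dependency conditions declared by the parent chart (requirements.yaml here, or the dependencies block of Chart.yaml in newer charts). A quick way to confirm which value gates which subchart in the vendored copy (a sketch, assuming the conditions are declared in the requirements.yaml shown in the directory listing below):

# List each declared subchart together with the value that enables it.
# Assumes the umbrella chart declares its subcharts with condition fields.
grep -E 'name:|condition:' charts/nri-bundle/requirements.yaml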

And this is the Helm chart directory structure:

 charts
├── nri-bundle
│   ├── Chart.yaml
│   ├── charts
│   │   ├── kube-state-metrics
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   └── values.yaml
│   │   ├── newrelic-infrastructure
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   └── values.yaml
│   │   ├── newrelic-logging
│   │   │   ├── Chart.yaml
│   │   │   ├── README.md
│   │   │   ├── templates
│   │   │   └── values.yaml
│   │   ├── nri-kube-events
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   └── values.yaml
│   │   ├── nri-metadata-injection
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   └── values.yaml
│   │   └── nri-prometheus
│   │       ├── Chart.yaml
│   │       ├── templates
│   │       └── values.yaml
│   ├── requirements.lock
│   ├── requirements.yaml
│   └── values.yaml
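
To double-check that the vendored subcharts match what the parent chart declares, helm's dependency subcommands can be run against the local copy (a sketch; note that helm dependency update rewrites the charts/ directory):

# Show the declared dependencies and whether each one is present locally.
helm dependency list ./charts/nri-bundle

# Re-vendor the subcharts if any are missing or out of date.
helm dependency update ./charts/nri-bundle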

Expected output

This is the command I use to generate the expected (complete) output with helm template:

helm template newrelic-bundle newrelic/nri-bundle -f values.yaml > Helm-output.txt

Actual output

And this is the kustomize build output when Helm support is enabled (kustomize build . --enable-helm > kustomize-output.txt):

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: newrelic-infrastructure
    chart: newrelic-infrastructure-0.13.33
    heritage: Helm
    release: nri-bundle
  name: nri-bundle-newrelic-infrastructure
  namespace: newrelic
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/instance: nri-bundle
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nri-metadata-injection
    app.kubernetes.io/version: 1.2.0
    helm.sh/chart: nri-metadata-injection-1.0.1
  name: nri-bundle-nri-metadata-injection
  namespace: newrelic
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: newrelic-infrastructure
    chart: newrelic-infrastructure-0.13.33
    heritage: Helm
    mode: privileged
    release: nri-bundle
  name: nri-bundle-newrelic-infrastructure
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/metrics
  - nodes/stats
  - nodes/proxy
  - pods
  - services
  - secrets
  verbs:
  - get
  - list
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: nri-bundle
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nri-metadata-injection
    app.kubernetes.io/version: 1.2.0
    helm.sh/chart: nri-metadata-injection-1.0.1
  name: nri-bundle-nri-metadata-injection
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  verbs:
  - get
  - create
  - patch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - create
  - get
  - delete
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/approval
  verbs:
  - update
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - get
  - patch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
- apiGroups:
  - certificates.k8s.io
  resourceNames:
  - kubernetes.io/legacy-unknown
  resources:
  - signers
  verbs:
  - approve
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: newrelic-infrastructure
    chart: newrelic-infrastructure-0.13.33
    heritage: Helm
    mode: privileged
    release: nri-bundle
  name: nri-bundle-newrelic-infrastructure
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nri-bundle-newrelic-infrastructure
subjects:
- kind: ServiceAccount
  name: nri-bundle-newrelic-infrastructure
  namespace: newrelic
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: nri-bundle
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nri-metadata-injection
    app.kubernetes.io/version: 1.2.0
    helm.sh/chart: nri-metadata-injection-1.0.1
  name: nri-bundle-nri-metadata-injection
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nri-bundle-nri-metadata-injection
subjects:
- kind: ServiceAccount
  name: nri-bundle-nri-metadata-injection
  namespace: newrelic
---
apiVersion: v1
data:
  license: KioqKioqKioqKioqKioqKio=
kind: Secret
metadata:
  labels:
    app: newrelic-infrastructure
    chart: newrelic-infrastructure-0.13.33
    heritage: Helm
    mode: privileged
    release: nri-bundle
  name: nri-bundle-newrelic-infrastructure-config
  namespace: newrelic
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: nri-bundle
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nri-metadata-injection
    app.kubernetes.io/version: 1.2.0
    helm.sh/chart: nri-metadata-injection-1.0.1
  name: nri-bundle-nri-metadata-injection
  namespace: newrelic
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    app.kubernetes.io/name: nri-metadata-injection
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: nri-bundle
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nri-metadata-injection
    app.kubernetes.io/version: 1.2.0
    helm.sh/chart: nri-metadata-injection-1.0.1
  name: nri-bundle-nri-metadata-injection
  namespace: newrelic
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nri-metadata-injection
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nri-bundle
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: nri-metadata-injection
        app.kubernetes.io/version: 1.2.0
        helm.sh/chart: nri-metadata-injection-1.0.1
    spec:
      containers:
      - env:
        - name: clusterName
          value: test
        image: newrelic/k8s-metadata-injection:1.2.0
        imagePullPolicy: IfNotPresent
        name: nri-metadata-injection
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 1
          periodSeconds: 1
        resources:
          limits:
            memory: 80M
          requests:
            cpu: 100m
            memory: 30M
        volumeMounts:
        - mountPath: /etc/tls-key-cert-pair
          name: tls-key-cert-pair
      serviceAccountName: nri-bundle-nri-metadata-injection
      volumes:
      - name: tls-key-cert-pair
        secret:
          secretName: nri-bundle-nri-metadata-injection
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: newrelic-infrastructure
    chart: newrelic-infrastructure-0.13.33
    heritage: Helm
    mode: privileged
    release: nri-bundle
  name: nri-bundle-newrelic-infrastructure
  namespace: newrelic
spec:
  selector:
    matchLabels:
      app: newrelic-infrastructure
      release: nri-bundle
  template:
    metadata:
      labels:
        app: newrelic-infrastructure
        mode: privileged
        release: nri-bundle
    spec:
      containers:
      - env:
        - name: NRIA_LICENSE_KEY
          valueFrom:
            secretKeyRef:
              key: license
              name: nri-bundle-newrelic-infrastructure-config
        - name: CLUSTER_NAME
          value: test
        - name: ETCD_TLS_SECRET_NAMESPACE
          value: default
        - name: NRIA_DISPLAY_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: NRK8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: NRIA_CUSTOM_ATTRIBUTES
          value: '{"clusterName":"$(CLUSTER_NAME)"}'
        - name: NRIA_PASSTHROUGH_ENVIRONMENT
          value: KUBERNETES_SERVICE_HOST,KUBERNETES_SERVICE_PORT,CLUSTER_NAME,CADVISOR_PORT,NRK8S_NODE_NAME,KUBE_STATE_METRICS_URL,KUBE_STATE_METRICS_POD_LABEL,TIMEOUT,ETCD_TLS_SECRET_NAME,ETCD_TLS_SECRET_NAMESPACE,API_SERVER_SECURE_PORT,KUBE_STATE_METRICS_SCHEME,KUBE_STATE_METRICS_PORT,SCHEDULER_ENDPOINT_URL,ETCD_ENDPOINT_URL,CONTROLLER_MANAGER_ENDPOINT_URL,API_SERVER_ENDPOINT_URL,DISABLE_KUBE_STATE_METRICS
        image: newrelic/infrastructure-k8s:1.21.0
        imagePullPolicy: IfNotPresent
        name: newrelic-infrastructure
        resources:
          limits:
            memory: 300M
          requests:
            cpu: 100m
            memory: 150M
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /dev
          name: dev
        - mountPath: /var/run/docker.sock
          name: host-docker-socket
        - mountPath: /var/log
          name: log
        - mountPath: /host
          name: host-volume
          readOnly: true
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      serviceAccountName: nri-bundle-newrelic-infrastructure
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      volumes:
      - hostPath:
          path: /dev
        name: dev
      - hostPath:
          path: /var/run/docker.sock
        name: host-docker-socket
      - hostPath:
          path: /var/log
        name: log
      - hostPath:
          path: /
        name: host-volume
  updateStrategy:
    type: RollingUpdate
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/instance: nri-bundle
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nri-metadata-injection
    app.kubernetes.io/version: 1.2.0
    helm.sh/chart: nri-metadata-injection-1.0.1
  name: nri-bundle-nri-metadata-injection-job
  namespace: newrelic
spec:
  backoffLimit: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nri-bundle
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: nri-metadata-injection
        app.kubernetes.io/version: 1.2.0
        helm.sh/chart: nri-metadata-injection-1.0.1
    spec:
      containers:
      - args:
        - --service
        - nri-bundle-nri-metadata-injection
        - --webhook
        - nri-bundle-nri-metadata-injection
        - --secret
        - nri-bundle-nri-metadata-injection
        - --namespace
        - newrelic
        command:
        - ./generate_certificate.sh
        image: newrelic/k8s-webhook-cert-manager:1.2.1
        imagePullPolicy: IfNotPresent
        name: nri-metadata-injection-job
      restartPolicy: Never
      serviceAccountName: nri-bundle-nri-metadata-injection
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/instance: nri-bundle
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nri-metadata-injection
    app.kubernetes.io/version: 1.2.0
    helm.sh/chart: nri-metadata-injection-1.0.1
  name: nri-bundle-nri-metadata-injection
webhooks:
- clientConfig:
    caBundle: ""
    service:
      name: nri-bundle-nri-metadata-injection
      namespace: newrelic
      path: /mutate
  failurePolicy: Ignore
  name: metadata-injection.newrelic.com
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods
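
Only the newrelic-infrastructure and nri-metadata-injection resources appear here, even though several other subcharts are enabled in values.yaml. A quick way to compare what each tool rendered is to count the chart labels in the two saved outputs (a sketch, using the file names from the commands above):

# Tally rendered resources per chart label in each output file.
grep 'chart:' Helm-output.txt | sort | uniq -c
grep 'chart:' kustomize-output.txt | sort | uniq -c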

Kustomize version

v5.1.1

Operating system

macOS

k8s-ci-robot commented 3 months ago

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

k8s-triage-robot commented 2 weeks ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale