helm / helm

The Kubernetes Package Manager
https://helm.sh
Apache License 2.0

Deployed Kubernetes resource differs from its manifest in Helm output #13313

Open KisXab opened 2 months ago

KisXab commented 2 months ago

Output of helm version: version.BuildInfo{Version:"v3.15.4", GitCommit:"fa9efb07d9d8debbb4306d72af76a383895aa8c4", GitTreeState:"clean", GoVersion:"go1.22.6"}

Output of kubectl version: Client Version: v1.29.0 Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3 Server Version: v1.29.2

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AKS

We deployed an application from a Helm chart using the “helm install” command and noticed a problem in one of the deployed Kubernetes resources: labels we expected to be there were missing. We ran the “helm template” and “helm install --dry-run” commands, and then deployed with “helm install --debug”. When we then inspected the deployed resource in the cluster, it differed from the manifest printed by all of those commands.
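
For anyone reproducing this, one direct way to see the discrepancy is to diff the manifest Helm recorded for the release against the live object (the release and namespace names match the example below; the diff also shows Helm-added metadata, but the missing labels stand out):

# Manifest Helm believes it deployed
helm get manifest test -n test-ns

# The object as it actually exists in the cluster
kubectl get configmap configmap -n test-ns -o yaml

# Side-by-side diff (bash process substitution)
diff <(helm get manifest test -n test-ns) <(kubectl get configmap configmap -n test-ns -o yaml)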

Here is a test example illustrating the issue described above:

lib-chart/Chart.yaml

apiVersion: v2
description: Test Library Helm Chart
name: lib-chart
version: 0.0.1
type: library

lib-chart/templates/_labels.tpl

{{- define "lib-chart.labels" -}}
label1: {{ .Values.global.label1 }}
label2: {{ .Values.global.label2 }}
{{- end }}

app-chart/Chart.yaml

apiVersion: v2
name: app-chart
version: 0.0.1
dependencies:
  - name: lib-chart
    version: 0.0.1
    repository: "file://../lib-chart"

app-chart/values.yaml

(a value for label2 is missing intentionally)

global:
  label1: val1

app-chart/templates/ConfigMap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  labels:
    {{- include "lib-chart.labels" . | nindent 4 }}
data:
  mykey: myvalue
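
To run the example, the lib-chart dependency must first be vendored, since it is referenced through a file:// repository (assuming the directory layout above):

Helm-Issue/app-chart>helm dependency update .

This populates app-chart/charts/ with lib-chart so that the install below can resolve it.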

Deployment using Helm

Helm-Issue/app-chart>helm upgrade --install test --debug -n test-ns --create-namespace --values=./values.yaml .

history.go:56: [debug] getting history for release test
Release "test" does not exist. Installing it now.
install.go:214: [debug] Original chart version: ""
install.go:231: [debug] CHART PATH: .../Helm-Issue/app-chart

client.go:142: [debug] creating 1 resource(s)
client.go:142: [debug] creating 1 resource(s)
NAME: test
LAST DEPLOYED: Fri Sep  6 08:36:43 2024
NAMESPACE: test-ns
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
global:
  label1: val1

COMPUTED VALUES:
global:
  label1: val1
lib-chart:
  global:
    label1: val1

HOOKS:
MANIFEST:
---
# Source: app-chart/templates/ConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  labels:
    label1: val1
    label2:
data:
  mykey: myvalue

Live Manifest

apiVersion: v1
data:
  mykey: myvalue
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: test
    meta.helm.sh/release-namespace: test-ns
  creationTimestamp: "2024-09-06T06:36:44Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: configmap
  namespace: test-ns

As you can see, both labels (label1 and label2) that are present in the rendered manifest are missing from the live manifest!

When the same rendered manifest is installed with kubectl apply, both labels are present in the live manifest.

In other words, deploying with Helm produces a difference between the expected state and the live state in the cluster! This should not happen, and we would consider it an issue that needs to be fixed.

We would be grateful for your feedback, opinions, and advice, and look forward to hearing from you.

yardenshoham commented 2 months ago

It's because your configmap looks like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  labels:
    label1: val1
    label2: 
data:
  mykey: myvalue

This is not valid YAML.

KisXab commented 2 months ago

@yardenshoham thanks for your answer.

Yes, I can see that the rendered YAML is wrong.

But then there are two questions:

  1. Why do we not get a warning or error message from Helm? Even “helm install --debug” printed output implying it would deploy exactly what kubectl would (see the second question below). The mistake (a missing value for a label in values.yaml) was difficult to find, and a message would have helped here. (One way to catch this earlier is sketched after this list.)

  2. Why does kubectl deploy this manifest without complaint (the live manifest in the cluster looks exactly like the rendered one)? It seems neither kubectl nor the Kubernetes API has any concerns about it!?
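
One option for catching this earlier is to render the chart and push the result through a server-side dry run, which asks the API server to validate the manifest without persisting anything (the commands are standard Helm/kubectl; whether the server rejects or silently normalizes the null label may depend on the Kubernetes version, which this thread does not confirm):

Helm-Issue/app-chart>helm template test . --values=./values.yaml | kubectl apply --dry-run=server -n test-ns -f -

helm lint . is another render-time check, though it validates chart structure and templating rather than validating against the API server's schemas.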

gjenkins8 commented 1 month ago

I don't see why label1: val1 is being removed. label2: null (an implicit null is valid YAML, fwiw) is most likely being dropped by Kubernetes. As to why, I suspect:

https://github.com/helm/helm/issues/13053#issuecomment-2346238985
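
To make the point above concrete: a bare key in YAML parses as a null value, so the rendered labels block is equivalent to:

labels:
  label1: val1
  label2: null

Kubernetes label values must be strings (labels are a map[string]string), so a null value cannot survive in a live object. One way to turn the silent drop into a loud render-time failure is Helm's built-in required function; a minimal sketch of a stricter helper (the error messages are illustrative):

{{- define "lib-chart.labels" -}}
label1: {{ required "global.label1 must be set" .Values.global.label1 | quote }}
label2: {{ required "global.label2 must be set" .Values.global.label2 | quote }}
{{- end }}

With this version, helm template and helm install fail with the given message whenever a value is missing, instead of rendering an empty label.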