It's because your configmap looks like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  labels:
    label1: val1
    label2:
data:
  mykey: myvalue
```

This is not valid YAML.
@yardenshoham thanks for your answer.

Yes, I can imagine it is invalid YAML. But then there are two questions:

1. Why do we not get a warning or error message from Helm? Even `helm install --debug` printed the same manifest that kubectl later applies (see the second question below). The root cause (a missing value for a label in values.yaml) was difficult to find, and a message would have helped here.
2. Why does kubectl deploy this manifest successfully (the live manifest looks exactly like this in the cluster)? That means neither kubectl nor the Kubernetes API has any objections!?
I don't see why `label1: val1` is being removed. `label2:` (an implicit `nil`, which is valid YAML, fwiw) is most likely being dropped by Kubernetes. As to why, I suspect:
https://github.com/helm/helm/issues/13053#issuecomment-2346238985
Output of `helm version`:

```
version.BuildInfo{Version:"v3.15.4", GitCommit:"fa9efb07d9d8debbb4306d72af76a383895aa8c4", GitTreeState:"clean", GoVersion:"go1.22.6"}
```

Output of `kubectl version`:

```
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2
```

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AKS
We deployed an application from a Helm chart using the `helm install` command and noticed a problem in a deployed Kubernetes resource: labels we expected to be there were missing. We ran the `helm template` and `helm install --dry-run` commands and then deployed with `helm install --debug`. When we then checked the deployed Kubernetes resource in the cluster, it differed from the manifest printed in the screen output.
Here is a test example illustrating the issue described above:
lib-chart/Chart.yaml
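The file contents were not preserved here; the following is a hypothetical reconstruction of a minimal library chart definition consistent with the setup described:

```yaml
# Hypothetical reconstruction: a minimal library chart
apiVersion: v2
name: lib-chart
version: 0.1.0
type: library
```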
lib-chart/templates/_labels.tpl
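A sketch of what the labels helper presumably looks like, assuming it renders every key/value pair found under `.Values.labels` (the helper name `lib-chart.labels` is an assumption):

```yaml
{{/*
Hypothetical reconstruction: render all key/value pairs from .Values.labels.
A key without a value in values.yaml renders with an empty (nil) value here.
*/}}
{{- define "lib-chart.labels" -}}
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end -}}
```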
app-chart/Chart.yaml
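Again a hypothetical reconstruction, assuming the application chart declares the library chart as a local dependency:

```yaml
# Hypothetical reconstruction: the app chart depending on the library chart
apiVersion: v2
name: app-chart
version: 0.1.0
dependencies:
  - name: lib-chart
    version: 0.1.0
    repository: file://../lib-chart
```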
app-chart/values.yaml
(a value for label2 is missing intentionally)
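A plausible reconstruction of the values file, with `label2` deliberately left without a value:

```yaml
labels:
  label1: val1
  label2:   # value intentionally missing; this is an implicit nil
```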
app-chart/templates/ConfigMap.yaml
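A sketch of the template, assuming it pulls the labels in via the helper shown above:

```yaml
# Hypothetical reconstruction: the ConfigMap template using the labels helper
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  labels: {{- include "lib-chart.labels" . | nindent 4 }}
data:
  mykey: myvalue
```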
Deployment using Helm
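The command and its rendered output were not preserved; based on the manifest quoted in the reply above, the rendered output most likely looked like this (the release name `app` is an assumption):

```yaml
# Hypothetical reconstruction of the rendered manifest printed by
#   helm install app ./app-chart --debug
# (helm template and helm install --dry-run print the same manifest)
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  labels:
    label1: val1
    label2:
data:
  mykey: myvalue
```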
Live Manifest
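The live object was not preserved either; per the description below, both labels are gone (any labels or annotations that controllers might add are omitted for brevity):

```yaml
# Hypothetical reconstruction of the live object, e.g. from
#   kubectl get configmap configmap -o yaml
# Note: metadata.labels with label1/label2 is missing entirely
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
data:
  mykey: myvalue
```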
As you can see, both labels (label1 and label2), which are present in the rendered output manifest, are missing in the live manifest!
When installing the output/rendered manifest using `kubectl apply`, both labels are present in the live manifest, as sketched below.
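For comparison, a sketch of that live manifest after applying the rendered output directly with kubectl; the exact stored value of `label2` is an assumption (it plausibly ends up as an empty string):

```yaml
# Hypothetical reconstruction: live manifest after
#   helm template app ./app-chart > rendered.yaml
#   kubectl apply -f rendered.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  labels:
    label1: val1
    label2: ""   # assumption: the nil value is stored as an empty string
data:
  mykey: myvalue
```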
In other words, deploying with Helm results in a difference between the expected state and the live state in the cluster! This should not happen, and we would consider this an issue that needs to be fixed!?
We would be grateful for your feedback, opinions, and advice, and we look forward to them.