helm / helm

The Kubernetes Package Manager
https://helm.sh
Apache License 2.0

Helm not updating Kubernetes objects properly #5915

Closed thomastaylor312 closed 3 years ago

thomastaylor312 commented 5 years ago

Many users are still reporting issues where manifests are updated with a new value, but those changes are not applied to the actual Kubernetes objects. A common theme is that when you change a value from A to B and then from B back to A, things do not update correctly. However, the circumstances vary, and it has been hard for the core maintainers to reproduce it consistently.

This issue is meant to consolidate #1844 and #3933.

thomastaylor312 commented 5 years ago

As a note, I have taken this on and am trying to debug it based on all the examples provided in other issues

thomastaylor312 commented 5 years ago

If anyone has a chart they can share that reliably has this issue, please do

erismaster commented 5 years ago

https://github.com/mattermost/mattermost-helm/tree/master/charts/mattermost-team-edition

Making a change to the configJSON section of values.yaml (which generates a ConfigMap) does not apply the new values unless the ConfigMap is deleted prior to running helm upgrade --install mattermost -f values.yaml stable/mattermost-team-edition
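
Roughly, the sequence looks like this (a sketch; the kubectl label selector and the exact configJSON key being changed are illustrative assumptions):

helm upgrade --install mattermost -f values.yaml stable/mattermost-team-edition
# edit values.yaml: change any key under configJSON (e.g. something under EmailSettings)
helm upgrade --install mattermost -f values.yaml stable/mattermost-team-edition
# the generated config object in the cluster still shows the old data until it is deleted by hand
kubectl get configmap -l release=mattermost -o yaml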

thomastaylor312 commented 5 years ago

Awesome. Thank you @erismaster I'll give it a whirl

thomastaylor312 commented 5 years ago

@erismaster I can't seem to reproduce with that chart. I have tried modifying both the chart's values.yaml and passing an extra values file. I have also used those files to switch back and forth between values. Each time, it updates the Secret object without a problem. In my case, I was changing the email settings with

configJSON:
  EmailSettings:
    FeedbackEmail: foo@bar.com
    FeedbackName: Foobar
    FeedbackOrganization: Foobar

Is there something else I should be modifying to make it work?

erismaster commented 5 years ago

@thomastaylor312 I tried to reproduce again myself and am not able to repro anymore. Perhaps the bug was in the Helm chart itself and not in Helm. The chart was at this tag when I was experiencing the issue last.

thomastaylor312 commented 5 years ago

@erismaster Hmmm...doing the same thing as before and it is still working ok. Can you still reproduce the issue with that chart version? Also, what version of Helm and k8s are you using?

I am running k8s 1.14.2 with helm 2.13.1 (which is the latest version someone reported seeing the issue with)

erismaster commented 5 years ago

@thomastaylor312 When I reported the bug in the previous issue, it was against v1.14.1 (this cluster is on-prem, deployed with kubespray). Helm at that time was:

helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

Today's test was minikube v1.14.2, Helm:

Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}

erismaster commented 5 years ago

@thomastaylor312 I can confirm that on the existing cluster the issue is no longer present. I was able to modify a value and have it successfully update the secret. Helm has been upgraded from v2.13.1 -> v2.14.0. From what I can deduce, it would seem the Helm update from v2.13 to v2.14 (either server or client) has resolved this issue.

thomastaylor312 commented 5 years ago

@erismaster Thanks for digging into it! Glad it is working for you, I'll keep trying to dig around to see if I can duplicate it

peay commented 5 years ago

@thomastaylor312 not quite sure if this is related, but I couldn't find clear documentation on the topic/expected behavior:

Could this be related to what you are discussing here?

edit: https://github.com/helm/helm/issues/2070#issuecomment-284839395 says a bit more, but I am not sure whether this is up to date. Something in the official docs on expected behavior wrt manual edits may help

hickeyma commented 5 years ago

@thomastaylor312 Some feedback that might be relevant:

thomastaylor312 commented 5 years ago

@peay Yep, that comment is correct. We are working on a new merge strategy for Helm 3 stuff

@hickeyma re: the patch API, my thinking is something along those lines as well. Most people were reporting problems with Deployment and ConfigMap objects though, which are patch-able. Let me take a look at that example and give it a whirl
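
For what it's worth, one quick way to check whether the live object has actually diverged from what Helm believes it applied is to diff the release manifest against the cluster. A sketch (the release name is a placeholder, and kubectl diff needs a reasonably recent kubectl):

helm get manifest my-release > rendered.yaml
kubectl diff -f rendered.yaml   # non-empty output means the cluster differs from the release manifest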

hickeyma commented 5 years ago

@thomastaylor312 Sure, I just wanted to add a note about cases which are not patchable. That adds to the misunderstanding.

Issue #1811 might also be worth a look. If applicable, maybe add details to this and close out that issue?

pakhomov-passteam commented 5 years ago

+1 same issue with my chart's configmap

thomastaylor312 commented 5 years ago

@pakhomov-passteam do you have a chart I can use to duplicate? I've been trying for weeks and cannot duplicate the issue

haywoood commented 5 years ago

Just to add, I'm currently running into this issue where one of my SecretClaims (and subsequent container environment variables) is not reflected in the deployed yaml. --debug and helm template show the correct output, but the env var is left off. I can't delete this deployment, unfortunately.

The SecretClaim does exist in k get secret -n my-ns

thomastaylor312 commented 5 years ago

@haywoood What are your Helm and Kubernetes versions?

haywoood commented 5 years ago

@thomastaylor312

Helm: 2.8.2
Kubernetes:
  client: 1.14.2
  server: 1.11.10-gke.5

The secret in question lives in a list of secret definitions in values.yaml, which are then iterated over and placed in the manifest. I tried hardcoding this secret into my template, but no dice. It's hard to pin down what exactly is different about this secret, as it's a part of a larger list of secrets that exist without a problem.
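
For context, the template loop in question is along these lines (a simplified, hypothetical sketch; the value names are illustrative and not the actual chart):

{{- range .Values.secrets }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .name }}
type: Opaque
stringData:
  value: {{ .value | quote }}
{{- end }}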

thomastaylor312 commented 5 years ago

@haywoood Hmmm...that is an older version of Helm and Kubernetes. It seems like most people aren't seeing the issue with the latest version of Helm and k8s. Makes it kinda hard to see what caused it

MWilkin12 commented 5 years ago

@thomastaylor312 I am seeing the same issue - kubernetes (v1.11.7), helm (v2.12.2)

We have found that when doing an upgrade on a failed release, if environment variables exist with the same value as in the successfully deployed release, then they are not passed into the container. For example (this is a test example):

Revision 1: Helm get values (successfully deployed release)

  ENVIRONMENT: *****
  LIST: *****
  BUCKET: *****
  NAME: *****
  URL: *****
  ENV: *****

Revision 2: Helm get values with no config variables (causes the deployment to fail, as expected)

Revision 3: Helm get values (failed release)

  ENVIRONMENT: *****
  LIST: *****
  BUCKET: *****
  NAME: *****
  URL: *****
  ENV: *****

Revision 3: Kubernetes Deployment

  - env:
    - name: LIST
      value: *****
    - name: BUCKET
      value: *****
    - name: NAME
      value: *****
    - name: URL
      value: *****
    - name: ENV
      value: ***** 

As you can see, ENVIRONMENT (which had the same value) is not being passed into the Kubernetes Deployment, which causes it to fail.
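
For anyone trying to reproduce this, comparing what Helm recorded against what the cluster actually has can be done roughly like this (a sketch; the release and Deployment names are placeholders):

helm get values my-release --revision 1
helm get values my-release --revision 3
kubectl get deployment my-deployment \
  -o jsonpath='{.spec.template.spec.containers[0].env}'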

MWilkin12 commented 5 years ago

I have replicated the above on kubernetes (v1.11.7), helm (v2.14.2)

hickeyma commented 5 years ago

@MWilkin12 Helm currently uses Kubernetes 1.15. Would you be able to upgrade to a version near this?

MWilkin12 commented 5 years ago

Unfortunately not, we are using KOPS and the latest version we can go to is v1.12.x

diegosoek commented 5 years ago

Same error here:

Error: configmaps "app.job-config.beta-app" already exists

Helm version: v2.14.1
Kube version: Client: v1.14.0, Server: v1.13.7-gke.8

My ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app.job-config.beta-app
  labels:
    app: app
    chart: app-0.1.1
    release: beta
    heritage: Tiller
  annotations:
    "helm.sh/hook": "pre-install,pre-upgrade"
    "helm.sh/hook-weight": "-10"
    "helm.sh/hook-delete-policy": "hook-succeeded"
data:

thomastaylor312 commented 5 years ago

@diegosoek I am not sure if that is the same issue. The behavior we've seen is that the release "updates" but the values in an object are not actually updated

tecnobrat commented 5 years ago

I can confirm we saw this with:

Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

We changed the Docker image, for example from service:a to service:b and then back to service:a.

It never went back to service:a and stayed at service:b, even though if I do a helm get release-name I get a chart that has service:a in it. The Deployment is still showing service:b.
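
The sequence was roughly the following (a sketch; chart path, release name, and image values are placeholders):

helm upgrade my-release ./my-chart --set image.tag=a
helm upgrade my-release ./my-chart --set image.tag=b
helm upgrade my-release ./my-chart --set image.tag=a
helm get my-release | grep "image:"   # the stored manifest shows service:a
kubectl get deployment my-deploy \
  -o jsonpath='{.spec.template.spec.containers[0].image}'   # the live Deployment still shows service:b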

tecnobrat commented 5 years ago

We also cannot go to 1.14 yet since we're on AWS EKS which doesn't yet support 1.14.

tecnobrat commented 5 years ago

I followed up with the k8s slack and was told that we can run helm 2.14.x on our 1.13 cluster. I upgraded helm and I was able to repro it.

I went from a => b => a and the image never changed from b => a, even though helm get says b.

Helm:

Client: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}

K8s:

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.7", GitCommit:"4683545293d792934a7a7e12f2cc47d20b2dd01b", GitTreeState:"clean", BuildDate:"2019-06-06T01:46:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-eks-c57ff8", GitCommit:"c57ff8e35590932c652433fab07988da79265d5b", GitTreeState:"clean", BuildDate:"2019-06-07T20:43:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

So it doesn't appear that Helm 2.14 fixes it; perhaps it's a k8s version thing?

thomastaylor312 commented 5 years ago

It very well could be a k8s issue as well @tecnobrat. Thanks for the repro steps and I'll give it a try

gnalsa commented 5 years ago

I am hitting this issue as well with one of our complex internal charts. I am unable to reproduce the issue with a simple chart. I will keep trying to reproduce the issue, but I guess it is caused by some specific combination of resources.

Running k8s v1.11.5 and Helm v3.0.0-alpha.2:

Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:31:35Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
version.BuildInfo{Version:"v3.0.0-alpha.2", GitCommit:"97e7461e41455e58d89b4d7d192fed5352001d44", GitTreeState:"clean", GoVersion:"go1.12.7"}

thomastaylor312 commented 5 years ago

@gnalsa Please let me know if you do find something! The new three way merge should definitely prevent this kind of stuff from happening

tuantranf commented 5 years ago

@thomastaylor312 I'm not sure whether this helps anyone, but I have the same issue on:

(env) ➜ git:(develop) ✗ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
(env) ➜  git:(develop) ✗ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.10-eks-2e569f", GitCommit:"2e569fd887357952e506846ed47fc30cc385409a", GitTreeState:"clean", BuildDate:"2019-07-25T23:13:33Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

Running helm upgrade did not update my ConfigMap, so I tried a workaround of adding a label with a timestamp:

  labels:
    timestamp: "{{ .Release.Time.Seconds }}"

and it works for me.
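
For reference, the Helm documentation describes a related workaround: put a checksum of the ConfigMap into the Deployment's pod template annotations (spec.template.metadata.annotations) so that any change to the ConfigMap forces a rollout. A sketch, assuming the ConfigMap template lives at templates/configmap.yaml:

  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}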

winromulus commented 5 years ago

The timestamp workaround did not work for me. Unless I delete the configmap and apply it again, helm upgrade does nothing.

helm version

Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}

kubectl version

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.7", GitCommit:"4683545293d792934a7a7e12f2cc47d20b2dd01b", GitTreeState:"clean", BuildDate:"2019-06-06T01:39:30Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

sfitts commented 5 years ago

For us the key seems to be editing the ConfigMap (outside of Helm). I understand that Helm won't do a 3-way merge (yet), but I would expect it to overwrite any manual changes with the ConfigMap it generates, and that's not what we see.

winromulus commented 5 years ago

I don't understand how this is a 3-way merge for us (it does not seem to be the same issue). We created a ConfigMap via Helm and we update it via Helm, but the update fails. There are no processes or devs that touch that ConfigMap. And I agree with @sfitts: the expectation is that Helm will not do merges but will ensure that what we pass in as the newest value gets applied.

bacongobbler commented 5 years ago

FYI Helm performs a 3-way merge as of Helm 3.0.0-beta.1, @sfitts. More info here: https://v3.helm.sh/docs/faq/#improved-upgrade-strategy-3-way-strategic-merge-patches

With that, Helm should now overwrite manual changes assuming the chart changes that field. That FAQ goes over a few scenarios we tested with the change.

lucwillems commented 4 years ago

I have similar issues with changes in ConfigMaps which are not propagated. In my case, I use a ConfigMap to inject a configuration + setup script into a pre-upgrade hook, to be run before the upgrade. While working on the script, I noticed that any change I make in the script (embedded in the ConfigMap) is not updated on the k8s cluster. This is the way I run it:

The script currently ends with an error (exit 1) by design, so we can check/validate it is working before going forward with upgrading the pods.

I used random annotations & labels on the ConfigMap / hook Job, but no result. Workaround: uninstall / install, which is not an option in a production environment.

I'm using: version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

thomastaylor312 commented 4 years ago

@lucwillems is the ConfigMap also annotated as a hook? If so, is it weighted such that it gets updated first? It could also be that you don't have the "helm.sh/hook-delete-policy": before-hook-creation policy set if it is a pre-install/pre-upgrade hook.
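
A sketch of what that could look like on the ConfigMap hook (the weight is illustrative; the point is that it sorts before the Job hook and gets re-created on each upgrade):

metadata:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-10"
    "helm.sh/hook-delete-policy": before-hook-creation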

tendant commented 4 years ago

Had similar issue.

Helm version: version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.10-eks-aae39f", GitCommit:"aae39f4697508697bf16c0de4a5687d464f4da81", GitTreeState:"clean", BuildDate:"2019-12-23T08:19:12Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Added the below annotations in the Deployment of a Helm chart. After helm upgrade, the release version increases by 1; however, the pod stays the same as the old one, without the new annotations.

+  annotations:
+    prometheus.io/scrape: "true"
+    prometheus.io/path: /metrics
+    prometheus.io/port: "8080"

tendant commented 4 years ago

False report. It looks like the annotations are added on the Deployment rather than the pod, and I was checking the pod. The Deployment was actually correct.

I will still keep it here, in case someone else has a similar issue.

austinorth commented 4 years ago

Having an issue where a deployment shows as being updated correctly with a helm upgrade, but the pods are not updated because the ReplicaSet is not updated correctly by the deployment. Then the deployment refreshes and has its old values in place. Super weird, and I'm still digging around to determine what's causing this, but figured I'd mention here as this was the most relevant issue I could find on the topic.

woile commented 4 years ago

I'm having a similar issue. I have a normal chart; I update Chart.yaml:version, Chart.yaml:appVersion, and values.yaml:tag, and the pod is not recreated. I'm using pullPolicy: Always. The Deployment is updated.

$ helm version
version.BuildInfo{Version:"v3.3.1", GitCommit:"249e5215cde0c3fa72e27eb7a30e8d55c9696144", GitTreeState:"clean", GoVersion:"go1.14.7"}

nimishajn commented 3 years ago

I see behaviour where my ConfigMap gets deleted every time I run the helm upgrade command and then gets created again. Is this expected behaviour? This way, my upgrade completely depends upon ConfigMap creation, as the new pods refer to it, and if the ConfigMap creation step fails, my pods will never start.

These are the hooks I have attached to my ConfigMap:

annotations: "helm.sh/hook": pre-install,pre-upgrade "helm.sh/hook-weight": "-1" "helm.sh/hook-delete-policy": before-hook-creation

This is my chart structure:

mychart/
├── charts
│   ├── subcharts....
├── Chart.yaml
├── templates
│   ├── configmap.yaml
│   └── pod.yaml
└── values.yaml

bacongobbler commented 3 years ago

"helm.sh/hook-delete-policy": before-hook-creation

Yes. That hook deletion policy deletes and re-creates the object if it exists.

nimishajn commented 3 years ago

Even if I don't apply the "helm.sh/hook-delete-policy": before-hook-creation annotation to my ConfigMap resource, the helm upgrade command still deletes the existing map and recreates it. Is there a way I can avoid deleting the existing ConfigMap and only update it?

github-actions[bot] commented 3 years ago

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

kymtwyf commented 3 years ago

Is this issue fixed?

rajha-korithrien commented 3 years ago

While I don't think this is a solution for everyone experiencing this problem, for me the symptoms described in this issue were caused by user error: specifically, a poorly maintained .helmignore file.

I use vim, which creates files ending in ~ as a backup of what the editor is working on. Helm will quite happily render these files, which creates a duplicate of whatever the original file renders, but from the previous state of the file.

For the chart I am working on, notice the listing of files rendered:

[rajha@voyager charts]$ helm template -f asterisk-answers.yaml asterisk/ | grep Source
# Source: asterisk/templates/serviceaccount.yaml
# Source: asterisk/templates/tls-secret.yaml
# Source: asterisk/templates/configmap.yaml
# Source: asterisk/templates/configmap.yaml~
# Source: asterisk/templates/service-sip.yaml
# Source: asterisk/templates/service-web.yaml
# Source: asterisk/templates/services-rtp.yaml
# Source: asterisk/templates/deployment.yaml
# Source: asterisk/templates/ingress.yaml
# Source: asterisk/templates/tests/test-connection.yaml

What ends up in Kubernetes is the contents of configmap.yaml~ even though what I just edited and want is configmap.yaml.

Placing the proper pattern in the .helmignore file for my chart causes Helm to ignore these backup files, and this makes the issue disappear.
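
For completeness, the entry in question is just the usual editor-backup pattern (shown here as an assumption about what the relevant .helmignore line looks like):

# .helmignore
*~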

github-actions[bot] commented 3 years ago

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.