Closed gilesknap closed 2 months ago
In order to see that I have made a fix, I want to first be able to reproduce the issue. I tried the following:

1. `ec deploy bl01t-ea-test-01 2024.3.1` into my own namespace. `ec ps` shows the correct version and `Running`. `kubectl describe po bl01t-ea-test-01 > file1`
2. In `ioc-instances`: change `{{ .Release.Name }}-config` to `{{ .Release.Name }}-config1`. Tag `bl01t-ea-test-01` `2024.3.2`, also with `{{ .Release.Name }}-config1`.
3. `ec deploy bl01t-ea-test-01 2024.3.2` into my own namespace. `ec ps` shows the correct version and `Running`. `kubectl describe po bl01t-ea-test-01 > file2`
4. Diff `file1` and `file2` -> the only change is `bl01t-ea-test-01-config1`, as hoped. `kubectl get po` shows only one instance of `bl01t-ea-test-01-0`.
5. In `services/bl01t-ea-test-01/config/ioc.yaml`: change "description: Generic instance for testing generic IOCs" to "description: Rather generic instance for testing generic IOCs". Tag `bl01t-ea-test-01` `2024.3.3`.
6. `ec deploy bl01t-ea-test-01 2024.3.3` into my own namespace. `ec ps` shows the correct version and `Running`. `kubectl describe cm bl01t-ea-test-01-config` -> shows "description: Rather generic instance for testing generic IOCs".

@gilesknap I was hoping to see the same behaviour you described, but no luck.
Is it possible your pod has an issue?

> "Note that, even though the StatefulSet controller will not proceed to update the next Pod until its ordinal successor is Running and Ready, it will restore any Pod that fails during the update to that Pod's existing version."

https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
Should we move to "OnDelete" update strategy and do the explicit delete as you have suggested?
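If we did switch, the change itself is small: `updateStrategy` is a field on the StatefulSet spec. A minimal sketch, assuming the chart renders a standard `apps/v1` StatefulSet (the metadata name here is just the instance from the reproduction above):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: bl01t-ea-test-01
spec:
  # With OnDelete, the controller does NOT roll pods automatically when the
  # spec changes; each pod is recreated at the new revision only after it is
  # deleted explicitly (e.g. kubectl delete pod bl01t-ea-test-01-0).
  updateStrategy:
    type: OnDelete
```

The default strategy is `RollingUpdate`, so this would make the explicit delete a required step of every deploy rather than a workaround.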
Hmmm. I have repeatedly seen pods not being restarted when I only update the `ioc.yaml`.
I should have said: for both of the issues I'm having, this is with local-deploy. I'm not entirely convinced that should make a difference, given that each local deploy creates a different Helm package version. But if the YAML that describes the deployment has not changed, I can see why K8S would think it unnecessary to restart the pod.
If it's a local deploy, I believe file changes not being stored is a potential cause? I have had a weird one in the past where doing a save in a remote vscode session did not actually seem to store the changes immediately.
Frequently I deploy a new version of an IOC and the old pod hangs around. One reason for this is that the only update is to the ConfigMap, and K8S does not see that as a reason to restart a pod with the same container image.
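One common way to make a ConfigMap-only change trigger a restart is the Helm checksum-annotation pattern: hash the rendered ConfigMap into a pod-template annotation, so any config change alters the pod template and forces a rollout. A hedged sketch — the template filename `configmap.yaml` is an assumption about this chart's layout:

```yaml
# In the chart's StatefulSet template:
spec:
  template:
    metadata:
      annotations:
        # This value changes whenever the rendered ConfigMap changes, which
        # makes the pod template differ between revisions and so triggers a
        # restart even when the container image is unchanged.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

This avoids relying on an explicit pod delete, at the cost of restarting the IOC on every config edit.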
I also see StatefulSets hold on to an old version of a pod after they get updated; I'm not sure why.
Let's add an explicit delete of the existing pod after doing a deploy (or local deploy). That will fix the issue for sure.
@marcelldls can you take a look at this please?