currently, pods in the sematic server deployment don't restart when the configmap associated with them changes. on every helm upgrade, helm does rerun the migration pod with the updated configmap values, but the deployment pods themselves stick around unchanged. this is a known issue in k8s: https://github.com/kubernetes/kubernetes/issues/22368
to fix this, we take a checksum of all of the values in the helm chart and apply the checksum as an annotation on the pods. with this, k8s will forcibly roll the pods on any change to the helm values. while that is a bit of overkill (not all changes technically require the pods to restart), it is far less error-prone than the status quo.
also documented as a helm tip here: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
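for reference, a minimal sketch of the pattern in the deployment template (the annotation key `checksum/values` is illustrative; the actual key and values scope in the chart may differ):

```yaml
spec:
  template:
    metadata:
      annotations:
        # hash all chart values; any change to the values produces a new
        # checksum, which changes the pod template and forces k8s to roll
        # the deployment's pods
        checksum/values: {{ .Values | toYaml | sha256sum }}
```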