Current behavior, using docker image kubeaws/kube-spot-termination-notice-handler:1.13.7-1 on an EKS v1.19 cluster:
WARNING: the server could not find the requested resource: fluentd-f4pl7, jaeger-agent-9vxmg, aws-node-ppmbh, k8s-spot-termination-handler-vxdhc, kube-proxy-tzlc5, node-local-dns-vxw6b, node-problem-detector-tzlxg, prometheus-node-exporter-q94mz; Deleting pods with local storage: ...
The current drain process restarts all DaemonSet pods on the node, and all pods currently on the draining node will not work correctly.
Solution: update kubectl to the corresponding version; the change also reduces the number of docker layers in the final image.
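For illustration, a minimal Dockerfile sketch of this approach (the base image, kubectl patch version, and paths are assumptions, not the actual image's Dockerfile): pin kubectl to the cluster's minor version and collapse the download, chmod, and install into a single RUN so the final image has fewer layers.

```dockerfile
# Hypothetical sketch: kubectl pinned to the cluster's v1.19 minor version.
# Single RUN keeps the download + install in one layer.
FROM alpine:3.13
RUN apk add --no-cache curl \
 && curl -fsSLo /usr/local/bin/kubectl https://dl.k8s.io/release/v1.19.15/bin/linux/amd64/kubectl \
 && chmod +x /usr/local/bin/kubectl
```

Keeping the client within one minor version of the API server avoids the "the server could not find the requested resource" errors during drain.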
Fix: DaemonSet pods should no longer be restarted on drain.
FYI: if you are using the helm chart stable/k8s-spot-termination-handler, it is also not compatible with EKS v1.19. You need to fix https://github.com/helm/charts/blob/master/stable/k8s-spot-termination-handler/templates/clusterrole.yaml to support the resource "daemonsets" in API group "apps".
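For reference, the missing permission looks roughly like this (a sketch of the ClusterRole rule, not the chart's exact template). kubectl drain reads DaemonSets through the apps API group to recognize DaemonSet-managed pods and skip evicting them, so the handler's ClusterRole must allow it:

```yaml
# Sketch of the extra ClusterRole rule the chart needs;
# kubectl drain looks up DaemonSets to skip their pods during eviction.
rules:
  - apiGroups: ["apps"]
    resources: ["daemonsets"]
    verbs: ["get", "list"]
```

Without this rule, drain falls back to the removed extensions API group and fails on newer clusters.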