kube-aws / kube-spot-termination-notice-handler

A Kubernetes DaemonSet to gracefully delete pods 2 minutes before an EC2 Spot Instance gets terminated
Apache License 2.0

EKS v1.19 support #50

Closed · maksim-paskal closed 2 years ago

maksim-paskal commented 3 years ago

Fixes DaemonSet pods being restarted during node drain.

Current behavior, using the Docker image kubeaws/kube-spot-termination-notice-handler:1.13.7-1 on an EKS v1.19 cluster:

WARNING: the server could not find the requested resource: fluentd-f4pl7, jaeger-agent-9vxmg, aws-node-ppmbh, k8s-spot-termination-handler-vxdhc, kube-proxy-tzlc5, node-local-dns-vxw6b, node-problem-detector-tzlxg, prometheus-node-exporter-q94mz; Deleting pods with local storage: ...

Because of this error, the current drain process restarts all DaemonSets on the node, and the pods still running on the draining node will not work correctly.
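For context, the handler drains the node roughly as sketched below. The exact flags in the image's entrypoint are an assumption here; the point is that with an old kubectl, the DaemonSet lookup behind `--ignore-daemonsets` queries an API group that no longer exists on v1.19, which produces the "could not find the requested resource" warning above.

```sh
# Minimal sketch of the drain the handler runs on a spot termination
# notice. NODE_NAME and the exact flag set are assumptions, not the
# verbatim entrypoint of this image.
kubectl drain "${NODE_NAME}" \
  --ignore-daemonsets \
  --delete-local-data \
  --force
```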

Solution: update kubectl to the version matching the cluster. The changes also reduce the number of Docker layers in the final image.
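A sketch of what such an image update could look like. The base image, kubectl patch version, and `entrypoint.sh` name are illustrative assumptions, not the exact diff in this PR:

```Dockerfile
# Illustrative only: pin kubectl to the cluster's minor version (v1.19.x
# for EKS v1.19) and collapse download + install into a single RUN to
# reduce the number of layers in the final image.
FROM alpine:3.13

ARG KUBECTL_VERSION=v1.19.15

RUN apk add --no-cache curl ca-certificates \
    && curl -fsSL -o /usr/local/bin/kubectl \
       "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" \
    && chmod +x /usr/local/bin/kubectl

# entrypoint.sh is a hypothetical name for the handler's drain script.
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```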

FYI: if you are using the Helm chart stable/k8s-spot-termination-handler, it is also not compatible with EKS v1.19. Its ClusterRole template at https://github.com/helm/charts/blob/master/stable/k8s-spot-termination-handler/templates/clusterrole.yaml needs to be fixed to support the "daemonsets" resource in the "apps" API group.
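The missing RBAC rule would look roughly like this (a sketch with metadata abbreviated; the resource and API group are the ones named above):

```yaml
# Sketch of the ClusterRole rule the chart needs so kubectl drain can
# read DaemonSets from the "apps" API group.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-spot-termination-handler
rules:
  - apiGroups: ["apps"]
    resources: ["daemonsets"]
    verbs: ["get", "list"]
```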

maksim-paskal commented 2 years ago

Moving to aws-node-termination-handler.