Closed: skuda closed this issue 7 years ago
The autoscaler was running on this node, it seems: there was a pod named calling-eagle-acs-engine-autoscaler-2908112416-p9vg5 that got evicted from the node. That's why the node was marked as undrainable.
Any user/system pod that is not replicated (except kube-proxy) will cause the node it's on to be marked undrainable to avoid disruptions in your cluster.
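For what it's worth, that rule can be checked by hand. Below is a minimal sketch, assuming the official `kubernetes` Python client; the `REPLICATED_KINDS` set, the `blocking_pods` helper, and the node name are illustrative, not the autoscaler's actual code:

```python
# Minimal sketch (not the autoscaler's actual code) of the rule described
# above: a pod blocks draining if it is not backed by a replicating
# controller, with kube-proxy as the one exception. Assumes the official
# `kubernetes` Python client; names below are illustrative.
from kubernetes import client, config

# Controller kinds whose pods are safe to evict (they get rescheduled).
REPLICATED_KINDS = {"ReplicaSet", "ReplicationController", "StatefulSet", "Job"}

def blocking_pods(node_name):
    """Return the pods on `node_name` that would block a drain under this rule."""
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(
        field_selector="spec.nodeName=" + node_name).items
    blockers = []
    for pod in pods:
        # kube-proxy is explicitly exempted, per the comment above.
        if pod.metadata.name.startswith("kube-proxy"):
            continue
        owners = pod.metadata.owner_references or []
        # Anything without a replicating controller behind it is a blocker.
        if not any(o.kind in REPLICATED_KINDS for o in owners):
            blockers.append(pod.metadata.namespace + "/" + pod.metadata.name)
    return blockers

if __name__ == "__main__":
    config.load_kube_config()
    for name in blocking_pods("k8s-agent-12345678-2"):  # node name is made up
        print("would block drain:", name)
```

Running something like this against the blocked nodes should surface the unreplicated pods the autoscaler is refusing to evict.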
Ahh, OK, that explains the blocking of the other two nodes: they were running pods without replication too. Thanks!
Hi,
I found this state while testing today:
Right now all the load could be handled by 2 nodes, as the autoscaler correctly detects, but it doesn't cordon and drain the unneeded nodes. I just tried to do it manually myself (one way to script this is sketched after this post):
Is the reason for that "under-utilized-undrainable" state that the nodes were running the kube-proxy DaemonSet? Thanks!
Miguel.
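For reference, the manual cordon-and-drain mentioned above can be scripted. Here is a minimal sketch, again assuming the official `kubernetes` Python client; it does roughly what `kubectl cordon` followed by `kubectl drain` would do, with error handling and PodDisruptionBudget retries omitted:

```python
# Minimal sketch of a manual cordon-and-drain, assuming the official
# `kubernetes` Python client. Roughly what `kubectl cordon` + `kubectl drain`
# do; error handling and PodDisruptionBudget retries are omitted.
from kubernetes import client, config

def cordon_and_drain(node_name):
    v1 = client.CoreV1Api()
    # Cordon: mark the node unschedulable so nothing new lands on it.
    v1.patch_node(node_name, {"spec": {"unschedulable": True}})
    # Drain: evict every remaining pod on the node.
    pods = v1.list_pod_for_all_namespaces(
        field_selector="spec.nodeName=" + node_name).items
    for pod in pods:
        owners = pod.metadata.owner_references or []
        # DaemonSet pods (e.g. kube-proxy) are skipped: evicting them is
        # pointless since the DaemonSet would immediately recreate them.
        if any(o.kind == "DaemonSet" for o in owners):
            continue
        eviction = client.V1Eviction(  # V1beta1Eviction on older clients
            metadata=client.V1ObjectMeta(
                name=pod.metadata.name,
                namespace=pod.metadata.namespace))
        v1.create_namespaced_pod_eviction(
            pod.metadata.name, pod.metadata.namespace, eviction)

if __name__ == "__main__":
    config.load_kube_config()
    cordon_and_drain("k8s-agent-12345678-2")  # node name is made up
```

Note that the eviction API (rather than plain pod deletion) is what `kubectl drain` uses, since it respects PodDisruptionBudgets.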