Open cliedeman opened 6 years ago
This is a little trickier because a node currently can't drain itself:
```console
core@default-node-3 ~ $ sudo KUBECONFIG=/etc/kubernetes/kubelet.conf kubectl drain --delete-local-data --force --ignore-daemonsets default-node-3
node/default-node-3 cordoned
error: unable to drain node "default-node-3", aborting command...

There are pending nodes to be drained:
 default-node-3
error: daemonsets.extensions "calico-node" is forbidden: User "system:node:default-node-3" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system": calico-node-qsvlg; daemonsets.extensions "csi-linode-node" is forbidden: User "system:node:default-node-3" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system": csi-linode-node-fwrmv; daemonsets.extensions "kube-proxy" is forbidden: User "system:node:default-node-3" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system": kube-proxy-v5qzc; daemonsets.extensions "container-linux-update-agent" is forbidden: User "system:node:default-node-3" cannot get resource "daemonsets" in API group "extensions" in the namespace "reboot-coordinator": container-linux-update-agent-rxl4z
```
Remote-exec: local SSH agent forwarding would allow a node to SSH to the master and issue the drain command from there, but the nodes currently don't know the master's address via Terraform variables. A node could discover the API server address with kubectl commands or by parsing its kubeconfig files.
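As a rough sketch of the "parsing the kubeconfig" option: the API server URL sits under `clusters[].cluster.server` in the kubeconfig, so it can be pulled out with plain text tools even without kubectl. The sample file and path below are illustrative stand-ins for `/etc/kubernetes/kubelet.conf` on a node.

```shell
# Hypothetical sketch: extract the API server address from a kubeconfig.
# The heredoc below is a fake minimal kubeconfig standing in for the
# node's real /etc/kubernetes/kubelet.conf.
cat > /tmp/sample-kubelet.conf <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://203.0.113.10:6443
  name: default
EOF

# Grab the first "server:" value from the file.
server=$(grep -m1 'server:' /tmp/sample-kubelet.conf | awk '{print $2}')
echo "$server"   # → https://203.0.113.10:6443
```

When kubectl is available on the node, `kubectl config view --kubeconfig=/etc/kubernetes/kubelet.conf -o jsonpath='{.clusters[0].cluster.server}'` does the same thing without the text parsing.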
Local-exec: if we can rely on the generated kubeconfig file still existing in the Terraform workspace after the cluster was initially created, then we could use `local-exec` and the local kubectl to drain the nodes.
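A minimal sketch of what that `local-exec` approach might look like, assuming a `null_resource` per node, a `var.node_name` variable, and a kubeconfig written to the module directory (all of these names and the path are assumptions, not existing config in this repo):

```hcl
# Hypothetical sketch: drain a node from the machine running Terraform
# before it is destroyed, using the kubeconfig left in the workspace.
resource "null_resource" "drain_node" {
  provisioner "local-exec" {
    when    = "destroy"
    command = "kubectl --kubeconfig=${path.module}/kubeconfig drain --delete-local-data --force --ignore-daemonsets ${var.node_name}"
  }
}
```

This sidesteps the RBAC problem in the error above, since the drain runs with the admin kubeconfig rather than the node's own `system:node:*` credentials, but it only works if the workspace (and that file) survives between applies.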
Don't forget to adapt:

```shell
export KUBECONFIG=
```