kubernetes-sigs / cluster-api-provider-packet

Cluster API Provider Packet (now Equinix Metal)
https://deploy.equinix.com/labs/cluster-api-provider-packet/
Apache License 2.0
99 stars · 41 forks

Control Plane rolling update stall with EIP #163

Closed · jhead-slg · closed 1 week ago

jhead-slg commented 4 years ago

Versions: cluster-api v0.3.7, capp v0.3.2, packet-ccm v1.1.0

I am hitting an issue with cluster-api's ability to roll the control plane nodes. This appears to be because of how we bind the EIP to the node via the lo:0 interface and how cluster-api tears down the node's etcd instance before fully draining the workloads running on the node.

To reproduce, bring up a cluster with a single control plane node and a single worker node, then edit the KubeadmControlPlane's kubeadmConfigSpec, changing postKubeadmCommands to include an echo or another innocuous addition (see the example below).
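For example, a rollout like this can be triggered with a one-line patch (the KubeadmControlPlane name below is a placeholder, and the patch assumes postKubeadmCommands already exists in the spec):

# Hypothetical object name/namespace; adjust to your cluster.
kubectl patch kubeadmcontrolplane k8s-game-cp --type=json \
  --patch '[{"op":"add","path":"/spec/kubeadmConfigSpec/postKubeadmCommands/-","value":"echo rollout-trigger"}]'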

The new control plane node will begin deployment and eventually come into service. At some point, cluster-api kills etcd on the older control plane node and the Packet CCM EIP health check moves the EIP to the new node. Once etcd goes away, the kube-apiserver panics and the old node stalls: it can no longer reach the API through the EIP, which is still bound locally with no running kube-apiserver behind it.

After several minutes, on the new control plane you can see various pods stuck in the Terminating and/or Pending state, and cluster-api will not progress past this point.

# k get -A pods -o wide | grep -v Running
NAMESPACE        NAME                                               READY   STATUS        RESTARTS   AGE   IP              NODE                             NOMINATED NODE   READINESS GATES
core             cert-manager-webhook-69c8965665-49cfh              1/1     Terminating   0          11h   240.0.18.144    k8s-game-cp-1d5ce5-6wnjj         <none>           <none>
kube-system      cilium-operator-7597b4574b-bg94f                   1/1     Terminating   0          11h   10.66.5.5       k8s-game-cp-1d5ce5-6wnjj         <none>           <none>
kube-system      cilium-operator-7597b4574b-nlbjw                   0/1     Pending       0          20m   <none>          <none>                           <none>           <none>
kube-system      cilium-sjtk8                                       0/1     Pending       0          28m   <none>          <none>                           <none>           <none>
kube-system      coredns-66bff467f8-jznm9                           1/1     Terminating   0          11h   240.0.18.145    k8s-game-cp-1d5ce5-6wnjj         <none>           <none>
kube-system      coredns-66bff467f8-s77cv                           0/1     Pending       0          20m   <none>          <none>                           <none>           <none>
topolvm-system   controller-7d85c6bbbc-8ps5q                        0/5     Pending       0          20m   <none>          <none>                           <none>           <none>
topolvm-system   controller-7d85c6bbbc-ppvvz                        5/5     Terminating   0          11h   240.0.18.12     k8s-game-cp-1d5ce5-6wnjj         <none>           <none>

To get things moving again you have to log in to the old control plane node and run ip addr del <EIP>/32 dev lo. Once this is done, the local kubelet can talk to the API again, cluster-api evicts the pods, and the old node is deleted.

I believe these issues may be related:

https://github.com/kubernetes-sigs/cluster-api/issues/2937 https://github.com/kubernetes-sigs/cluster-api/issues/2652

As a workaround, I created the following script along with a systemd service, which get installed on all control plane nodes. This setup allows the rolling update to complete without manual intervention.

Script:

#!/usr/bin/env bash

set -o errexit
set -o nounset
set -o pipefail

# Elastic IP to watch, passed in by the systemd unit below.
EIP=$1

while true; do
    rc=0
    # Probe the API server through the EIP; retry transient failures for up
    # to three minutes before deciding the endpoint is really unreachable.
    curl -fksS --retry 9 --retry-connrefused --retry-max-time 180 "https://${EIP}:6443/healthz" || rc=$?
    # curl exits 7 when it cannot connect at all, i.e. the EIP is still bound
    # on lo:0 but no kube-apiserver is answering behind it.
    if [[ $rc -eq 7 ]]; then
        echo "removing EIP ${EIP}"
        ifdown lo:0
        ip addr del "${EIP}/32" dev lo || true
        break
    fi
    echo ""
    # Sleep a random 0-14 seconds to jitter the probes.
    sleep $((RANDOM % 15))
done
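For context, curl exit code 7 means it could not connect at all; after the retries above, a connection refused against the locally bound EIP still ends up as exit code 7, which is what the script keys on. A quick manual check looks like this (the EIP value is a placeholder):

EIP=147.75.40.2   # placeholder; substitute the cluster's actual elastic IP
curl -fksS "https://${EIP}:6443/healthz"; echo "curl exit code: $?"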

postKubeadmCommands addition:

        cat <<EOT > /etc/systemd/system/packet-eip-health.service
        [Unit]
        Description=Packet EIP health check
        Wants=kubelet.service
        After=kubelet.service

        [Service]
        Type=simple
        Restart=on-failure
        ExecStart=/usr/local/bin/packet-eip-health.sh {{ .controlPlaneEndpoint }}

        [Install]
        WantedBy=multi-user.target
        EOT

        systemctl daemon-reload
        systemctl enable packet-eip-health
        systemctl start packet-eip-health
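Not shown above is how packet-eip-health.sh itself lands at /usr/local/bin on the node. As an illustration only (not necessarily the exact setup used here), one way that mirrors the heredoc pattern above is an additional postKubeadmCommands entry:

        # Sketch: write the health-check script before enabling the unit.
        cat <<'EOT' > /usr/local/bin/packet-eip-health.sh
        <script body from above>
        EOT
        chmod 0755 /usr/local/bin/packet-eip-health.sh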
deitch commented 4 years ago

Let me see if I understand this. When I say "node" below, I mean "control plane node":

  1. node A is in good state
  2. node B is brought up
  3. node A needs to be brought down
  4. node A apiserver goes down
  5. CCM sees node A apiserver is down, switches EIP to node B
  6. CAPI kills etcd on node A
  7. node A still has some processes that need to talk to etcd; they can no longer talk locally, so they try to talk to the load balancer EIP
  8. node A still has the EIP configured locally, so it tries to reach etcd locally and fails

Is that correct?

jhead-slg commented 4 years ago

Yes, that is mostly correct. I believe step 6 actually happens right after step 3 (i.e., before steps 4 and 5), which is what causes the API server to die as well.

deitch commented 4 years ago

So what really needs to happen is, once node A goes down (step 4), it needs the local IP routing removed. Correct?

jhead-slg commented 4 years ago

Correct.

deitch commented 4 years ago

Thanks for the clarification. It would be nice not to have to deal with the IP locally at all; e.g. if the EIP were 100.10.10.10 and the node IPs were 100.10.10.20 and 100.10.10.30, it would work perfectly. The problem is that getting there requires a real load balancer doing inbound NAT (changing the destination IP on the packet before it hits the host), rather than lower-level network primitives (routers and switches).

BGP helps, but doesn't completely solve it; the same goes for the EIP. FWIW, the Kubernetes kube-proxy also helps, as it sets up iptables rules independent of the local routes. I wouldn't mind trying to leverage that, but kube-proxy is essentially global: every host runs it, and the rules are the same across all of them.
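For illustration only (assuming kube-proxy in its default iptables mode), the Service NAT rules it programs live in the nat table and apply regardless of what is bound on lo:

# Sketch: inspect the DNAT rules kube-proxy maintains for Services.
iptables -t nat -L KUBE-SERVICES -n | head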

CCM itself is a Deployment with replicas=1, so it cannot control the IP addresses/routes/iptables on a different host unless we deploy a separate DaemonSet.

deitch commented 4 years ago

Also, your fix works well when installing via CAPP (hence the issue on this repo), but the EIP is controlled by the CCM, and a general solution needs to account for non-CAPP situations as well.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-packet/issues/163#issuecomment-751668742):

> Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
cprivitere commented 5 months ago

/reopen

cprivitere commented 5 months ago

/remove-lifecycle rotten

k8s-ci-robot commented 5 months ago

@cprivitere: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-packet/issues/163#issuecomment-2007146614):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
cprivitere commented 5 months ago

This should be tested with the latest CPEM to see if the DaemonSet changes resolve it.

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 week ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 week ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-packet/issues/163#issuecomment-2293682000):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.