kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

A watchdog to restart nodes in NotReady state #15606

Closed: victor-sudakov closed this issue 1 month ago

victor-sudakov commented 1 year ago

/kind feature

1. Describe IN DETAIL the feature/behavior/change you would like to see.

There are cases when a node is NotReady from the point of view of Kubernetes/kOps, but healthy from the point of view of the corresponding AWS autoscaling group. The easiest way to reproduce this situation is to stop the kubelet service on a node: the node will stay in the NotReady state forever after that. What is worse, pods using PVCs will never be rescheduled from such a node to other nodes, because the EBS volumes are still attached to the NotReady node and cannot be detached until it is stopped or terminated.

Can we add some watchdog addon that would signal Unhealthy to AWS (aws autoscaling set-instance-health --health-status Unhealthy, or something similar) when a node has been NotReady for a certain configured time?

This would allow clusters to heal themselves.
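
A minimal sketch of what such a watchdog could look like on AWS (purely illustrative, not an existing kOps feature; assumes GNU date and the usual aws:///&lt;az&gt;/&lt;instance-id&gt; providerID format):

  #!/bin/bash
  # Hypothetical watchdog: for every node that has been NotReady longer
  # than THRESHOLD seconds, mark its EC2 instance Unhealthy so the ASG
  # replaces it.
  THRESHOLD=300

  for node in $(kubectl get nodes --no-headers | awk '$2 ~ /NotReady/ {print $1}'); do
    # lastTransitionTime of the Ready condition = when the node went NotReady
    since=$(kubectl get node "$node" \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}')
    age=$(( $(date +%s) - $(date -d "$since" +%s) ))
    if [ "$age" -gt "$THRESHOLD" ]; then
      # spec.providerID looks like aws:///us-east-1a/i-0123456789abcdef0
      id=$(kubectl get node "$node" -o jsonpath='{.spec.providerID}')
      aws autoscaling set-instance-health \
        --instance-id "${id##*/}" --health-status Unhealthy
    fi
  done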

hakman commented 1 year ago

ClusterAutoscaler might be a good fit for this use case. They recently added a new flag --scale-down-unready-enabled. https://github.com/kubernetes/autoscaler/pull/5537

victor-sudakov commented 1 year ago

I have always been wary of the ClusterAutoscaler due to its complexity, so I have never used it. For example, from https://kops.sigs.k8s.io/addons/#cluster-autoscaler I get the idea that I have to specify the latest supported image of ClusterAutoscaler for the given Kubernetes version. Does that mean I will have to update it manually in the manifest on each kops upgrade cluster run?

In short, I think it's overkill for my rather simple tasks and will add a lot of admin overhead. Do you think there is a simpler alternative solution?

hakman commented 1 year ago

I have always been wary of the ClusterAutoscaler due to its complexity, so I have never used it. For example, from https://kops.sigs.k8s.io/addons/#cluster-autoscaler I get the idea that I have to specify the latest supported image of ClusterAutoscaler for the given Kubernetes version. Does that mean I will have to update it manually in the manifest on each kops upgrade cluster run?

There are 3 options here:

  1. You wait for a contributor to add these args for cluster-autoscaler to kOps
  2. You make a PR to add these args to kOps
  3. You deploy and manage cluster-autoscaler yourself with these args

In short, I think it's overkill for my rather simple tasks and will add a lot of admin overhead. Do you think there is a simpler alternative solution?

Check if https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds helps. Once nodes are empty, cluster-autoscaler should terminate them.
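
For reference, the DefaultTolerationSeconds admission plugin gives every pod a 300-second NoExecute toleration for the not-ready and unreachable taints, and a pod can shorten that explicitly. A minimal sketch (the name and the 30-second values are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: quick-evict             # hypothetical name
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "while true; do sleep 3600; done"]
    tolerations:
    # Override the 300s defaults so the pod is evicted sooner
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 30
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 30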

victor-sudakov commented 1 year ago

There are 3 options here: You wait for a contributor to add these args for cluster-autoscaler to kOps

Can you please confirm: do I understand correctly that installing ClusterAutoscaler in its current state will indeed break the automation of kops upgrade cluster? Or was it my erroneous interpretation of the docs?

Once nodes are empty, cluster-autoscaler should terminate them.

The affected nodes will never be empty: their pods using PVCs will never be rescheduled to other nodes, because the EBS volumes are still attached to the NotReady node and cannot be detached until it is stopped or terminated. A node must actually be terminated to free the EBS volume backing the PV so it can be reattached elsewhere.

Snippet from the log of such a situation:

Warning  FailedAttachVolume  3m9s  attachdetach-controller  Multi-Attach error for volume "pvc-bb834416-d65f-4d85-b7bf-8f64e2e62786" Volume is already exclusively attached to one node and can't be attached to another
Warning  FailedMount         66s   kubelet                  Unable to attach or mount volumes: unmounted volumes=[db-rm], unattached volumes=[db-rm kube-api-access-ss8w5]: timed out waiting for the condition

hakman commented 1 year ago

Can you please confirm: do I understand correctly that installing ClusterAutoscaler in its current state will indeed break the automation of kops upgrade cluster? Or was it my erroneous interpretation of the docs?

You cannot use kOps to manage an addon (like cluster-autoscaler) and modify the manifests later. kOps will eventually notice and remove the changes.

victor-sudakov commented 1 year ago

You cannot use kOps to manage an addon (like cluster-autoscaler) and modify the manifests later. kOps will eventually notice and remove the changes.

Sorry, I don't quite understand you. So, when I add the autoscaler addon, I specify a particular version in spec.image, correct? Then, when I have to upgrade the cluster via kops upgrade cluster, what should I do with the addon and with the value of spec.image? Update it manually, and at what moment?

hakman commented 1 year ago

The problem is not the spec.image. That will be kept. You will not be able to change --scale-down-unneeded-time.

PS: Just try, it's easy to test your assumptions on a test cluster.

victor-sudakov commented 1 year ago

I guess I don't need it kept, I need it updated automatically to the latest version the cluster is running.

Of course, I'll experiment with the addon, but I still think it's overkill for the simple task of restarting NotReady nodes just to free EBS volumes. The addon's documentation is itself overwhelming.

victor-sudakov commented 1 year ago

I want something like this, just for kOps: https://docs.digitalocean.com/developer-center/automatic-node-repair-on-digitalocean-kubernetes/

hakman commented 1 year ago

What you want would mostly work on AWS, but not on most other supported cloud providers. kOps relies on cluster-autoscaler in general, which already has this feature, so most likely no such feature will be added. The --scale-down-unready-enabled flag was only added to make it possible to disable the behavior you want; the functionality itself has been there for some time (probably 4+ years).

Add flag '--scale-down-unready-enabled' to enable or disable scale-down of unready nodes. Default value set to true for backwards compatibility (i.e., allow scale-down of unready nodes). There are cases where a user may not want the unready nodes to be removed from a cluster. As an example, but not limited to, this might be useful in case a node is unreachable for a period of time and local data live there, the node shall remain in the cluster, and possibly an admin may want to take any actions for recovering it.

victor-sudakov commented 1 year ago

I have not received a definite answer, so can you please tell me: does the autoscaler bring additional administrative overhead after it is installed into the cluster?

hakman commented 1 year ago

All you have to do is enable it. If you want a newer image, kOps will not overwrite it on update/upgrade.

  clusterAutoscaler:
    enabled: true

victor-sudakov commented 1 year ago

Sorry, this does not solve the problem. I installed the autoscaler and the metrics server:

clusterAutoscaler:
  enabled: true
metricsServer:
  enabled: true
  insecure: true

Then I did a rolling update of the cluster, SSH-ed into a node and stopped the kubelet. The node became NotReady in the "kubectl get nodes" output and has been in this state for 28 minutes already. No attempt has been made to restart or terminate it.

Probably, to use the autoscaler as a NotReady watchdog, some more non-default configuration is needed?

hakman commented 1 year ago

It could be combined with other timers. I would suggest you wait 1 hour and see. Also, check the cluster-autoscaler logs to see what it thinks of the node.

DefaultScaleDownUnreadyTime = 20 * time.Minute
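
For a self-managed cluster-autoscaler deployment (option 3 earlier in the thread), this timer is exposed as a flag; a sketch of the relevant container args, with the 5m value chosen purely for illustration:

  # args of the cluster-autoscaler container (illustrative values)
  - --scale-down-unready-enabled=true
  - --scale-down-unready-time=5m    # default is the 20m constant quoted above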

victor-sudakov commented 1 year ago

An hour is too much for cluster recovery, but I will wait and report.

victor-sudakov commented 1 year ago

PS: An hour is too much because a node in the NotReady state does not release its EBS volumes, so StatefulSets with PVCs stop working for the whole time the node is not ready. In this context, even 5 minutes is too much.

olemarkus commented 1 year ago

I think you are perhaps looking for the wrong solution. When nodes become not-ready, pods should be evicted. The time before eviction is also configurable. See https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions

PVCs certainly follow the pod regardless of the instance state. The EBS controller will take care of detaching and reattaching EBS volumes as needed.

victor-sudakov commented 1 year ago

When nodes become not-ready, pods should be evicted.

That is a different story. Even if all the pods are successfully evicted from the node, it is not good to keep the NotReady node around forever. There should be some self-repair mechanism in the cluster, like the one DigitalOcean uses (see the link 9 messages above).

olemarkus commented 1 year ago

If the node has been (mostly) evicted, i.e. its utilisation is below the configured CAS threshold, then CAS will terminate it. There is a huge difference between failing workloads and failing instances. Besides, you also want the not-ready node to stick around so you can determine the cause of failure.

victor-sudakov commented 1 year ago

I would suggest you wait 1 hour and see.

An hour has passed and still ... nada. The log of one of the three autoscaler pods: https://termbin.com/z2vu

victor-sudakov commented 1 year ago

Besides, you also want the not-ready node to stick around so you can determine the cause of failure.

Maybe I do, if a replacement node has been started. But not the way it works now, where the failed node is just marked NotReady and no replacement is started.

olemarkus commented 1 year ago

The logs say that it shouldn't scale down because you are at minimum capacity anyway, and there is no need for additional capacity in the cluster. This is not a very likely scenario in a cluster with actual workloads.
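
For reference, the "minimum capacity" here is the instance group's minSize, which kOps maps to the ASG minimum and below which CAS will never scale. A sketch, with illustrative names and values (other required fields elided):

  apiVersion: kops.k8s.io/v1alpha2
  kind: InstanceGroup
  metadata:
    labels:
      kops.k8s.io/cluster: example.k8s.local   # hypothetical cluster name
    name: nodes
  spec:
    role: Node
    machineType: t3.medium
    minSize: 1    # CAS will not scale the group below this
    maxSize: 4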

victor-sudakov commented 1 year ago

And no eviction happens either. The StatefulSet-managed pods are just stuck in the "Terminating" status on the NotReady node. I can see the NotReady node has the node.kubernetes.io/unreachable:NoExecute and node.kubernetes.io/unreachable:NoSchedule taints, but the pod is still there and has not been rescheduled anywhere.

You know what? Could you reproduce this for me?

  1. Create a kOps cluster on AWS.
  2. Create a StatefulSet with one test pod and a PVC (make sure you configure a volumeClaimTemplate in the StatefulSet and the pod has a volume attached); a minimal manifest is sketched after this list.
  3. Disable the kubelet on the node where your test pod is running (e.g. via SSH to the node). This emulates a faulty node.
  4. Watch the node stay in the NotReady state forever and the test pod stay in the "Terminating" status forever, never rescheduled.
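
A minimal StatefulSet manifest for step 2, assuming the cluster has a default EBS-backed StorageClass (all names and sizes here are illustrative):

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: pvc-test              # hypothetical name
  spec:
    serviceName: pvc-test
    replicas: 1
    selector:
      matchLabels:
        app: pvc-test
    template:
      metadata:
        labels:
          app: pvc-test
      spec:
        containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "while true; do sleep 3600; done"]
          volumeMounts:
          - name: data
            mountPath: /data
    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

For step 3, something like sudo systemctl stop kubelet over SSH should do.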

victor-sudakov commented 1 year ago

The logs say that it shouldn't scale down because you are at minimum capacity anyway, and there is no need for additional capacity in the cluster. This is not a very likely scenario in a cluster with actual workloads.

This is because I really don't need autoscaling, I just need faulty node replacement. And you are right, this one is an experimental cluster.

olemarkus commented 1 year ago

Sorry, I am not able to reproduce this now. If the pod hangs on termination, most likely there is a finaliser blocking it.

hakman commented 1 year ago

See: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-ca-deal-with-unready-nodes

victor-sudakov commented 1 year ago

See: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-ca-deal-with-unready-nodes

Once there are more unready nodes in the cluster, CA stops all operations until the situation improves.

That totally does not sound like it would restart or recreate any nodes. It looks like the autoscaler just stops working if there are more unready nodes than a certain threshold.

victor-sudakov commented 1 year ago

Sorry, I am not able to reproduce this now. If the pod hangs on termination, most likely there is a finaliser blocking it.

Whatever the problem with eviction is, it is not for this issue. I'll debug it and either find the reason or open another issue. Let's stick to kOps self-healing here.

olemarkus commented 1 year ago

Sure. Happy to review a PR.

victor-sudakov commented 1 year ago

The question of automatically restarting NotReady nodes is still open. The autoscaler (at least with default settings) does not seem suitable for this purpose.

victor-sudakov commented 1 year ago

I think you are perhaps looking for the wrong solution. When nodes become not-ready, pods should be evicted.

@olemarkus After some reading on the internet, I got the idea that there are some architectural problems, or peculiarities, with pods managed by StatefulSets. In fact, unless something has changed in recent versions of Kubernetes, such pods never get evicted from an unreachable node, and this is intentional and by design. So there does not seem to be much left other than to automatically kill the failed node.

pgrunm commented 1 year ago

Hi, maybe Medik8s can help you. I haven't tried it yet, but I found it a few weeks ago.

hakman commented 1 year ago

The new recovery from non-graceful node shutdown in K8s 1.28 may be of some interest.
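
The manual step that feature builds on is applying the out-of-service taint to a node that is known to be down, e.g.:

  kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute

Once the taint is in place, the control plane can force-delete the pods and detach the volumes so StatefulSet pods can move.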

victor-sudakov commented 1 year ago

The new recovery from non-graceful node shutdown in K8s 1.28 may be of some interest.

This can be nice and useful, but then again, it describes a manual procedure, while my feature request is about an automatic mechanism that would allow a kOps-managed cluster to self-heal.

PS: Sorry for the late reply, I've been on vacation.

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Mark this issue as fresh with /remove-lifecycle stale
  - Close this issue with /close
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Mark this issue as fresh with /remove-lifecycle rotten
  - Close this issue with /close
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Reopen this issue with /reopen
  - Mark this issue as fresh with /remove-lifecycle rotten
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 6 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

victor-sudakov commented 6 months ago

/reopen

k8s-ci-robot commented 6 months ago

@victor-sudakov: Reopened this issue.

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Reopen this issue with /reopen
  - Mark this issue as fresh with /remove-lifecycle rotten
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 5 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

victor-sudakov commented 5 months ago

/reopen

k8s-ci-robot commented 5 months ago

@victor-sudakov: Reopened this issue.

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Reopen this issue with /reopen
  - Mark this issue as fresh with /remove-lifecycle rotten
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 4 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

victor-sudakov commented 3 months ago

/reopen

k8s-ci-robot commented 3 months ago

@victor-sudakov: Reopened this issue.

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Reopen this issue with /reopen
  - Mark this issue as fresh with /remove-lifecycle rotten
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 2 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
