kubernetes / kubectl

Issue tracker and mirror of kubectl code
Apache License 2.0

RFE: 'kubectl evict ...' or similar for conveniently evicting pods #1345

Open wking opened 2 years ago

wking commented 2 years ago

What would you like to be added?

A kubectl evict ... subcommand or similar syntactic sugar around the eviction API.

Why is this needed?

Using the delete API is convenient, but dangerous. For example:

$ kubectl -n openshift-monitoring delete pods prometheus-k8s-0 prometheus-k8s-1
pod "prometheus-k8s-0" deleted
pod "prometheus-k8s-1" deleted

This leaves an OpenShift cluster temporarily without monitoring, because both Prometheus replicas are deleted at once. It's safer to use the eviction API, which respects PodDisruptionBudgets:

$ kubectl create --raw /api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0/eviction -f - <<EOF
> {"apiVersion": "policy/v1", "kind": "Eviction", "metadata": {"name": "prometheus-k8s-0"}}
> EOF
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","code":201}
$ kubectl create --raw /api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1/eviction -f - <<EOF
> {"apiVersion": "policy/v1", "kind": "Eviction", "metadata": {"name": "prometheus-k8s-1"}}
> EOF
Error from server (TooManyRequests): Cannot evict pod as it would violate the pod's disruption budget.

However, that's a bit of a mouthful, and requires a here-doc and duplicating the pod name. It might be possible to add --subresource to the create subcommand to support something like:

$ kubectl -n openshift-monitoring create --subresource eviction pod prometheus-k8s-0

But that would likely bump into the current guards that require the URI path and the Eviction resource to carry redundant information. It's not clear to me why the eviction handler can't backfill missing Eviction fields, like the name, from the path, or, really, why it can't default the entire Eviction resource when the caller doesn't need to set explicit delete options. Possibly kubernetes/kubernetes#53185 touches on this, although I haven't wrapped my head around that yet.

Or, instead of trying to make create more flexible, we could grow a new subcommand like evict just for the eviction subresource.

Either way, it would be nice to make the safer eviction API more convenient, so folks don't have to choose between safety and convenience when bumping pods.
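
Until something like that exists, the raw call above can be wrapped in a tiny helper so the here-doc and the duplicated pod name only have to be written once. This is just a sketch; the function name and argument order are invented for illustration:

# Sketch of a convenience wrapper around the raw eviction call above.
# Usage: evict NAMESPACE POD, e.g. evict openshift-monitoring prometheus-k8s-0
evict() {
  local ns="$1" pod="$2"
  kubectl create --raw "/api/v1/namespaces/${ns}/pods/${pod}/eviction" -f - <<EOF
{"apiVersion": "policy/v1", "kind": "Eviction", "metadata": {"name": "${pod}"}}
EOF
}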

There is at least one existing plugin implementation here.

k8s-ci-robot commented 2 years ago

@wking: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

wking commented 2 years ago

/sig cli

atiratree commented 2 years ago

This was discussed in sig-cli, and the outcome was that this should first be adopted by the community as a kubectl krew plugin.

Luckily, it seems there is such a plugin called evict-pod, which has some recognition already: https://github.com/rajatjindal/kubectl-evict-pod.
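
For anyone who wants to try it, here is a hedged sketch of installing and using it via krew; the plugin name in the krew index and the exact flags are assumptions, so check kubectl krew info evict-pod and the plugin's help for the real interface:

$ kubectl krew install evict-pod
$ kubectl evict-pod prometheus-k8s-0 -n openshift-monitoring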

The repo seems active, but it is missing some options for parameterizing the eviction (most notably gracePeriodSeconds). Hopefully those should be easy to add by contributing to the repo.
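
In the meantime, a grace period can already be passed through the raw eviction call from the issue description, since the Eviction object carries a deleteOptions field; the namespace and pod name below are placeholders:

$ kubectl create --raw /api/v1/namespaces/my-namespace/pods/my-pod-0/eviction -f - <<EOF
> {"apiVersion": "policy/v1", "kind": "Eviction", "metadata": {"name": "my-pod-0"}, "deleteOptions": {"gracePeriodSeconds": 30}}
> EOF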

sftim commented 1 year ago

I think this is a kubectl feature request, albeit one we don't plan to prioritize.

/transfer kubectl

eddiezane commented 1 year ago

We would like to see some community demand and interest before considering upstreaming a plugin with this functionality. Please react to the top level issue with a :+1: if you come across this and agree.

frivoire commented 1 year ago

I would be very interested in this feature 😃; here is why:

We run stateful workloads on Kubernetes: typically a cluster of 3 pods, where our failure tolerance is that losing 1 pod is OK, but not 2. Sometimes we need to manually "restart" one specific pod (and not the others) because of a malfunction in that pod.

So our current procedure is:

  1. check that all pods are running & ready, either with kubectl get pods -l ... or a monitoring dashboard
  2. then execute kubectl delete pod/xxxxxx-i

But of course, this process is dangerous because of a race condition: another pod can go down (for another reason, see below) between steps 1 and 2, so the manual deletion takes out a 2nd pod at the same time => incident 😢

Examples of situations that can create this race condition:

With the proposed feature, our procedure would simply be:

  1. execute kubectl evict pod/xxxxxx-i

So it would be safe (and also simpler, though that's not the main objective here) 😃
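
For context on why eviction closes that race: with a PodDisruptionBudget like the sketch below (the namespace, name, and selector are placeholders), the API server only allows a voluntary disruption while at least 2 pods would stay available, so a second eviction is rejected with the TooManyRequests error shown in the issue description instead of taking out a 2nd pod.

$ kubectl -n my-namespace apply -f - <<EOF
> apiVersion: policy/v1
> kind: PodDisruptionBudget
> metadata:
>   name: my-app
> spec:
>   minAvailable: 2
>   selector:
>     matchLabels:
>       app: my-app
> EOF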

sftim commented 1 year ago

I'd one day like to be able to evict pods by label. That could arrive first in the plugin, or be something that we implement in-tree.
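
Until that exists, a by-label version can be sketched by combining kubectl get with the raw eviction call from the issue description; the namespace and label below are placeholders, and rejected evictions are simply reported by kubectl, not retried:

# Sketch: evict every pod matching a label, one eviction request per pod.
for pod in $(kubectl -n my-namespace get pods -l app=my-app -o name); do
  pod="${pod#pod/}"
  kubectl create --raw "/api/v1/namespaces/my-namespace/pods/${pod}/eviction" -f - <<EOF
{"apiVersion": "policy/v1", "kind": "Eviction", "metadata": {"name": "${pod}"}}
EOF
done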

Other plausible enhancements that build on this basic idea:

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Mark this issue as fresh with /remove-lifecycle stale
  - Close this issue with /close
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 10 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Mark this issue as fresh with /remove-lifecycle stale
  - Close this issue with /close
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Mark this issue as fresh with /remove-lifecycle rotten
  - Close this issue with /close
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten