kubernetes / kubectl

Issue tracker and mirror of kubectl code
Apache License 2.0

Support patch with selector #909

Open mbrancato opened 4 years ago

mbrancato commented 4 years ago

What would you like to be added: I came across a use case where I would like to patch multiple resources at once. Without looking at the available flags, I ran:

$ kubectl patch nodes ...
error: resource(s) were provided, but no name, label selector, or --all flag specified

Seeing that error, I tried to remedy it by adding a selector, as the error message suggests:

$ kubectl patch nodes --selector='mylabel' ...
Error: unknown flag: --selector

Well, interesting.

Why is this needed: This is obviously possible as a two-step process with get nodes and a loop, but it would be nice to be able to patch multiple resources using a selector.
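
For reference, a minimal sketch of that two-step workaround (the label and the patch body below are just placeholders):

# Select the matching nodes first, then patch each one in a loop:
for node in $(kubectl get nodes -l mylabel -o name); do
  kubectl patch "$node" --type=merge -p '{"metadata":{"annotations":{"example":"value"}}}'
done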

eddiezane commented 4 years ago

This error is coming from the generic resource builder here and here.

I'm not sure if a conscious decision was made to not support label selectors or --all with patch. If so we might want to change that error to avoid confusion.

@mbrancato, to help drill down on the action here, what's your use case for patching multiple nodes?

/triage needs-information

mbrancato commented 4 years ago

@eddiezane I wanted to remove labels from multiple nodes, but I guess this applies to anyone wanting to relabel existing resources.

eddiezane commented 4 years ago

@mbrancato the kubectl label command lets you remove labels with a trailing dash: foo- removes the label foo. This works with label selectors and --all.

 ~ kubectl label nodes foo=bar --all
node/pikube-0 labeled
node/pikube-2 labeled
node/pikube-1 labeled
 ~ kubectl get nodes -l foo=bar
NAME       STATUS   ROLES    AGE    VERSION
pikube-0   Ready    master   237d   v1.18.6+k3s1
pikube-2   Ready    <none>   237d   v1.18.6+k3s1
pikube-1   Ready    <none>   237d   v1.18.6+k3s1
 ~ kubectl label nodes foo- --all
node/pikube-0 labeled
node/pikube-2 labeled
node/pikube-1 labeled
 ~ kubectl get nodes -l foo=bar
No resources found.
 ~ kubectl label nodes pikube-0 foo=bar
node/pikube-0 labeled
 ~ kubectl get nodes -l foo=bar
NAME       STATUS   ROLES    AGE    VERSION
pikube-0   Ready    master   237d   v1.18.6+k3s1
 ~ kubectl label nodes -l foo=bar foo-
node/pikube-0 labeled
 ~ kubectl get nodes -l foo=bar
No resources found.

It's tucked away as the last example in the help text. We could probably make that clearer.

Does that solve your use case?

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/909#issuecomment-758494934).
tnozicka commented 3 years ago

This is still needed. I ran into this yesterday when I wanted to clear conditions with a patch for all instances of my CRD.

/reopen
/sig cli

k8s-ci-robot commented 3 years ago

@tnozicka: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/909#issuecomment-758587069).
tnozicka commented 3 years ago

/remove-lifecycle rotten

joejulian commented 3 years ago

I found this issue while wanting to mass-remove finalizers in order to clean out a namespace.
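
For anyone with the same need, a rough interim workaround looks something like this (the CRD name and namespace are placeholders, and force-clearing finalizers can leave external resources orphaned, so use with care):

# List matching resources by type/name, then null out the finalizers on each with a merge patch:
kubectl get mycrds -n stuck-namespace -o name \
  | xargs -I{} kubectl patch {} -n stuck-namespace --type=merge -p '{"metadata":{"finalizers":null}}'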

eddiezane commented 3 years ago

/triage accepted

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/909#issuecomment-950188472).
tnozicka commented 11 months ago

still very much needed

/reopen
/assign

k8s-ci-robot commented 11 months ago

@tnozicka: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/909#issuecomment-1789033505).
tnozicka commented 11 months ago

I have a PR ready in https://github.com/kubernetes/kubernetes/pull/121673

/remove-lifecycle stale
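
If that goes in, the original example should collapse to a one-liner roughly along these lines (the exact flag spelling depends on what finally merges; the patch body is a placeholder):

kubectl patch nodes -l mylabel --type=merge -p '{"metadata":{"annotations":{"example":"value"}}}'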

tnozicka commented 11 months ago

/remove-lifecycle rotten

tnozicka commented 11 months ago

/remove-triage needs-information (already marked as triage/accepted)

taylorpaul commented 2 months ago

I know there is a pending pull request here, but I'm posting my hacky solution for anyone looking for a workaround in the meantime. In my case I had about 50 jobs whose names start with batch-19 for which I wanted to patch the number of permitted parallel pods. I get all the jobs, use awk to filter on the batch prefix, and then patch all of those jobs at once:

# Code for filtering relevant resources, returning space separated list:
kubectl get jobs -n demo | awk 'BEGIN { ORS=" "}; {if ($1 ~ "batch-19") print $1}'

# Command for patching that list of resources:
kubectl patch jobs -n demo $(kubectl get jobs -n demo | awk 'BEGIN { ORS=" "}; {if ($1 ~ "batch-19") print $1}') --type=strategic --patch '{"spec":{"parallelism":4}}'
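
If the jobs happen to share a label, the awk filtering can be dropped in favor of a selector on the get side (the batch=19 label here is hypothetical):

# Same idea, but selecting by label and patching each returned type/name pair:
kubectl get jobs -n demo -l batch=19 -o name \
  | xargs -I{} kubectl patch {} -n demo --type=strategic --patch '{"spec":{"parallelism":4}}'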