mbrancato opened this issue 4 years ago
This error is coming from the generic resource builder here and here.
I'm not sure whether a conscious decision was made not to support label selectors or `--all` with `patch`. If so, we might want to change that error message to avoid confusion.
@mbrancato, to help drill down on the action here, what's your use case for patching multiple nodes?
/triage needs-information
@eddiezane I wanted to remove labels on multiple nodes, but I guess this applies to anyone wanting to relabel an existing resource.
@mbrancato the `kubectl label` command allows you to remove labels with the syntax `foo-` to remove the label `foo`. This works with label selectors and `--all`.
```
~ kubectl label nodes foo=bar --all
node/pikube-0 labeled
node/pikube-2 labeled
node/pikube-1 labeled
~ kubectl get nodes -l foo=bar
NAME       STATUS   ROLES    AGE    VERSION
pikube-0   Ready    master   237d   v1.18.6+k3s1
pikube-2   Ready    <none>   237d   v1.18.6+k3s1
pikube-1   Ready    <none>   237d   v1.18.6+k3s1
~ kubectl label nodes foo- --all
node/pikube-0 labeled
node/pikube-2 labeled
node/pikube-1 labeled
~ kubectl get nodes -l foo=bar
No resources found.
~ kubectl label nodes pikube-0 foo=bar
node/pikube-0 labeled
~ kubectl get nodes -l foo=bar
NAME       STATUS   ROLES    AGE    VERSION
pikube-0   Ready    master   237d   v1.18.6+k3s1
~ kubectl label nodes -l foo=bar foo-
node/pikube-0 labeled
~ kubectl get nodes -l foo=bar
No resources found.
```
It's tucked away as the last example in the help text. We could probably make that clearer.
Does that solve your use case?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with `/reopen`.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
This is still needed. I ran into this yesterday when I wanted to clear conditions with a patch for all instances of my CRD.
/reopen /sig cli
@tnozicka: Reopened this issue.
/remove-lifecycle rotten
I found this wanting to mass remove finalizers in order to clean out a namespace.
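For the finalizer case, a rough workaround sketch in the meantime (the resource type `mycrds`, namespace `demo`, and label `cleanup=true` are illustrative, not from this thread) is to pipe `kubectl get -o name` into `xargs`:

```shell
# Hypothetical names: "mycrds" resource, "demo" namespace, "cleanup=true" label.
# List matching resources by name, then patch each one to clear its finalizers.
# With a JSON merge patch, setting a field to null deletes it.
kubectl get mycrds -n demo -l cleanup=true -o name \
  | xargs -r -I{} kubectl patch {} -n demo --type=merge \
      --patch '{"metadata":{"finalizers":null}}'
```

The `-r` flag stops `xargs` from running `kubectl patch` at all when the selector matches nothing.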
/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Mark this issue as rotten with `/lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`

Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
still very much needed
/reopen /assign
@tnozicka: Reopened this issue.
I have a PR ready in https://github.com/kubernetes/kubernetes/pull/121673 /remove-lifecycle stale
/remove-lifecycle rotten
/remove-triage needs-information (already marked as triage/accepted)
I know there is a pending pull request here, but I'm posting my hacky solution for those looking for a workaround in the meantime. In my case I had about 50 jobs starting with batch-19 whose permitted number of parallel pods I wanted to patch. I get all the jobs, use awk to filter by the batch number, and then patch all of those jobs at once:
```shell
# Filter the relevant resources, returning a space-separated list:
kubectl get jobs -n demo | awk 'BEGIN { ORS=" "}; {if ($1 ~ "batch-19") print $1}'

# Patch that list of resources:
kubectl patch jobs -n demo $(kubectl get jobs -n demo | awk 'BEGIN { ORS=" "}; {if ($1 ~ "batch-19") print $1}') --type=strategic --patch '{"spec":{"parallelism":4}}'
```
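A slightly sturdier variant of the same idea (still a sketch; it assumes the jobs share the `batch-19` name prefix) filters on `-o name` output instead, which does not depend on the column layout of the default table:

```shell
# "-o name" emits names like "job.batch/batch-19-...", so we can filter on
# the prefix and hand the surviving list to a single kubectl patch call.
kubectl get jobs -n demo -o name \
  | grep '^job.batch/batch-19' \
  | xargs -r kubectl patch -n demo --type=strategic \
      --patch '{"spec":{"parallelism":4}}'
```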
What would you like to be added: I came across a use-case where I would like to patch multiple things at once. Without looking I did:
So seeing that error, I quickly remedied it by adding a selector as the error says I should do:
Well, interesting.
Why is this needed: This is obviously possible in a two-step process with `get nodes` and a loop, but it would seem nice to be able to patch multiple things using a selector.
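The two-step process mentioned above can be sketched like this (the label `foo=bar` and the merge patch that removes it are illustrative):

```shell
# Step 1: list matching nodes by name; step 2: patch each one in a loop.
# A JSON merge patch with a null value removes the "foo" label.
for node in $(kubectl get nodes -l foo=bar -o name); do
  kubectl patch "$node" --type=merge \
    --patch '{"metadata":{"labels":{"foo":null}}}'
done
```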