Open DenverJ opened 2 years ago
The problem stems from the fact that label initializes a new printer for each object (https://github.com/kubernetes/kubernetes/blob/10eb7092f854c71122c03752465e868bce23c0b6/staging/src/k8s.io/kubectl/pkg/cmd/label/label.go#L390), unlike annotate.
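For illustration, here is a minimal, self-contained Go sketch (not kubectl code) that contrasts one shared printers.YAMLPrinter with the per-object initialization described above. The ConfigMap objects are just stand-ins for whatever the command prints.

```go
// yamlsep.go - sketch contrasting a shared YAMLPrinter with per-object printers.
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/cli-runtime/pkg/printers"
)

func configMap(name string) *corev1.ConfigMap {
	return &corev1.ConfigMap{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{Name: name},
	}
}

func main() {
	objs := []*corev1.ConfigMap{configMap("first"), configMap("second")}

	fmt.Println("# one shared printer (roughly what annotate ends up doing):")
	shared := &printers.YAMLPrinter{}
	for _, o := range objs {
		_ = shared.PrintObj(o, os.Stdout) // second call is prefixed with "---"
	}

	fmt.Println("# one printer per object (what label does):")
	for _, o := range objs {
		p := &printers.YAMLPrinter{} // fresh printer, internal count starts at 0
		_ = p.PrintObj(o, os.Stdout) // count is always 1, so no "---" is written
	}
}
```

With the shared printer the second document is prefixed with ---; with per-object printers it never is, which matches the behaviour described in the report.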
/triage accepted
/assign
Currently, YAMLPrinter only adds the document separator (---) when the object count for a single YAMLPrinter instance is greater than 1 (https://github.com/kubernetes/kubernetes/blob/e4fca6469022309bc0ace863fce73054a0219464/staging/src/k8s.io/cli-runtime/pkg/printers/yaml.go#L48).
However, for the label command (and a couple of other commands) there is no single YAMLPrinter; in order to adjust the message in the printer object, a YAMLPrinter is initialized per object (https://github.com/kubernetes/kubernetes/blob/10eb7092f854c71122c03752465e868bce23c0b6/staging/src/k8s.io/kubectl/pkg/cmd/label/label.go#L390).
As a result, each YAMLPrinter's count never exceeds 1, so no document separator is added.
This happens to work for the annotate command because its print message is not changed per object (there is a separate issue for that: https://github.com/kubernetes/kubernetes/issues/110123) and the printer is initialized once, in Complete.
I wonder whether it would be possible to always add the document separator, without checking the counter, here: https://github.com/kubernetes/kubernetes/blob/e4fca6469022309bc0ace863fce73054a0219464/staging/src/k8s.io/cli-runtime/pkg/printers/yaml.go#L48
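As a purely illustrative sketch of that idea (this wrapper does not exist in kubectl or cli-runtime), unconditionally writing the separator could look like the following; the alwaysSeparated type and its behaviour are hypothetical.

```go
// Hypothetical wrapper: writes "---" before every document instead of
// relying on the printer's internal counter. Not existing kubectl code.
package main

import (
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/cli-runtime/pkg/printers"
)

type alwaysSeparated struct {
	delegate printers.ResourcePrinter
}

func (a alwaysSeparated) PrintObj(obj runtime.Object, w io.Writer) error {
	// A leading "---" before the first document is still valid YAML, so
	// emitting it unconditionally sidesteps the per-printer counter entirely.
	if _, err := io.WriteString(w, "---\n"); err != nil {
		return err
	}
	return a.delegate.PrintObj(obj, w)
}

func main() {
	cm := &corev1.ConfigMap{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{Name: "example"},
	}
	// Even with a fresh wrapper (and a fresh inner printer) per object,
	// every document still gets its own separator.
	for i := 0; i < 2; i++ {
		p := alwaysSeparated{delegate: &printers.YAMLPrinter{}}
		_ = p.PrintObj(cm, os.Stdout)
	}
}
```

The trade-off would be that single-object output also starts with ---, which is valid YAML but a visible change for existing consumers.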
@eddiezane @brianpursley @soltysh
/unassign
Hi! I would like to take a look at this issue and see if I can help with it. /assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
It looks like PR #110124 is already addressing this issue and has been hardened and reviewed for a while now. I'm not really contributing to sig-cli anymore so I'll not look further into this. Unassigning and closing my PR so another person can tackle it!
/unassign
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
What happened: When running "kubectl label" with a file input containing multiple objects, no document separator is included in the output. This means only the last object will be picked up to be labelled on the cluster (or passed on to an apply command, etc.).
What you expected to happen: Multiple objects labelled with a document separator in between.
How to reproduce it (as minimally and precisely as possible):
Output:
Anything else we need to know?: The same process and data, but using the "annotate" command instead of "label", works perfectly and includes document separators, as per the output below.
Environment:
- Kubernetes version (use kubectl version):