kubernetes / kubectl

Issue tracker and mirror of kubectl code
Apache License 2.0

Include Namespace in `kubectl delete` Dry-Run Output #1621

Open totegamma opened 2 weeks ago

totegamma commented 2 weeks ago

What would you like to be added:

Prefix the namespace to the kubectl delete dry-run output.

current:

$ kubectl delete -f ... --dry-run=server
deployment.apps "myapp" deleted (server dry run)
$

proposed:

$ kubectl delete -f ... --dry-run=server
myapp-namespace deployment.apps "myapp" deleted (server dry run)
$

Why is this needed:

The current output is ambiguous: the resource name alone does not uniquely identify a resource, since the same name can exist in multiple namespaces.

When working with multiple namespaces like "myapp-prod" and "myapp-dev", and intending to tear down some resources in "myapp-dev", the command might look like this:

$ kustomize build . | kubectl delete -f - --dry-run=server
deployment.apps "kube-prometheus-stack-kube-state-metrics" deleted (server dry run)
$

From this output, it's unclear whether the manifest targets "myapp-dev" or "myapp-prod". This ambiguity requires additional checks to ensure the correct namespace is being targeted.

Printing the namespace in the dry-run output would enhance clarity and confidence in identifying the targeted resources.

Other considerations:

This change could also be applied to other operations such as apply and replace. However, non-delete operations can be validated with the "diff" command, so I think it is acceptable to add this feature only for the delete operation.

A sample implementation would look like this: https://github.com/totegamma/kubernetes/commit/65c18816d3bc8b47810d1230bbf88e8aef219a5e
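The linked commit is not reproduced here, but the proposed output change could be sketched roughly as below. `deleteMessage` is a hypothetical helper for illustration only, not the actual kubectl printer code:

```go
package main

import "fmt"

// deleteMessage builds the one-line result that kubectl delete would print.
// Hypothetical helper; the real implementation lives in kubectl's printers.
func deleteMessage(namespace, groupKind, name string, serverDryRun bool) string {
	suffix := ""
	if serverDryRun {
		suffix = " (server dry run)"
	}
	if namespace != "" {
		// Proposed change: prefix the namespace so the output is unambiguous.
		return fmt.Sprintf("%s %s %q deleted%s", namespace, groupKind, name, suffix)
	}
	// Cluster-scoped resources keep the current format.
	return fmt.Sprintf("%s %q deleted%s", groupKind, name, suffix)
}

func main() {
	fmt.Println(deleteMessage("myapp-namespace", "deployment.apps", "myapp", true))
}
```

For a cluster-scoped resource (empty namespace), the helper falls back to today's format, so only namespaced resources gain the prefix.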

If this issue is accepted, I would like to be assigned and open a PR.

Ritikaa96 commented 2 weeks ago

Hi @totegamma, the dry run does in fact check the namespace; it is just not shown in the output. If you try to delete a pod that exists in another namespace, an error is raised. Adding the namespace, as in pod <name> (ns) deleted (server dry run), may add clarity, though developers usually already know that detail.

totegamma commented 2 weeks ago

Hello @Ritikaa96,

Thank you for your reply.

Yes, I know there are no problems with the internal mechanism. I just want the output to show the namespace for clarity; hiding it is a little unhelpful.

When the applied manifest is large, it is hard to keep track of all the resources. This often happens when we use generators such as Helm charts or Kustomize. We can check the manifest with other commands such as grep, but since kubectl already has a dry-run mode, it would be nice for it to print the namespace for clarity.

mpuckett159 commented 1 week ago

/triage accepted
/good-first-issue

k8s-ci-robot commented 1 week ago

@mpuckett159: This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-good-first-issue command.

In response to [this](https://github.com/kubernetes/kubectl/issues/1621):

> /triage accepted
> /good-first-issue

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

totegamma commented 1 week ago

/assign