kubernetes / kubectl

Issue tracker and mirror of kubectl code
Apache License 2.0

kubectl diff requires update / patch permission #981

Open mbrancato opened 3 years ago

mbrancato commented 3 years ago

What happened:

Usually, when we think about using diff to compare two things, write access is not required. In this case, however, kubectl diff seems to require write access to compare the current and future state. This might be an upstream problem if it is due to server-side apply.

When doing kubectl diff on a Deployment:

```
Error from server (Forbidden): deployments.extensions "my-super-deployment" is forbidden:
User "myuser" cannot patch resource "deployments" in API group "extensions" in the namespace
"mynamespace": requires one of ["container.deployments.update"] permission(s).
```

It seems to require PATCH / PUT permission under `/apis/apps/v1/namespaces/{namespace}/deployments/*` or the older `/apis/extensions/v1beta1/namespaces/{namespace}/deployments/*`.
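Under the current behavior, one workaround is to grant the diffing identity `patch` alongside read access via a narrowly scoped Role. This is a hedged sketch, not a recommendation: the role, user, and namespace names are placeholders, and note that `patch` permits real modifications, not only dry-run diffs.

```shell
# Hypothetical minimal Role for running `kubectl diff` on Deployments under
# the current "dry run is authorized like a real write" behavior.
# Caution: `patch` also allows actual changes, not just dry runs.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-differ   # placeholder name
  namespace: mynamespace    # placeholder namespace
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch"]
EOF

# Bind it to the identity that runs the diff (placeholder user):
kubectl create rolebinding deployment-differ --namespace mynamespace \
  --role=deployment-differ --user=myuser
```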

What you expected to happen:

Diff output to be shown.

How to reproduce it (as minimally and precisely as possible):

Attempt to diff a Deployment resource as a user lacking write access to the deployment. The permission name in the example above is GKE-specific.

Anything else we need to know?:

Environment:

eddiezane commented 3 years ago

kubectl supports +1/-1 version of the api server so your v1.19.3 client is pretty off from our version skew policy.

You could try using an older kubectl version or upgrading your cluster to a supported version.

Please reopen if you run into this with a supported version.

/close

k8s-ci-robot commented 3 years ago

@eddiezane: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/981#issuecomment-731614193). Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
mbrancato commented 3 years ago

Hi @eddiezane

Here is a minimal example. It uses https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/controllers/nginx-deployment.yaml as `test.yaml`:

```
$ kubectl apply -f test.yaml
deployment.apps/nginx-deployment created
$ kubectl get deployment
NAMESPACE            NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
default              nginx-deployment         3/3     3            3           14s
$ kubectl create clusterrolebinding view --group=view --clusterrole=view
clusterrolebinding.rbac.authorization.k8s.io/view created
$ kubectl get deployment --as=kubernetes-admin --as-group=view
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           20m
$ kubectl apply -f test-modified.yaml --as=kubernetes-admin --as-group=view
Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":88}]}]}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"nginx"}],"containers":[{"$setElementOrder/ports":[{"containerPort":88}],"name":"nginx","ports":[{"containerPort":88},{"$patch":"delete","containerPort":80}]}]}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-deployment", Namespace: "default"
for: "test.yaml": deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
$ kubectl diff -f test-modified.yaml --as=kubernetes-admin --as-group=view
Error from server (Forbidden): deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
```

```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:08:32Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
```

/reopen

k8s-ci-robot commented 3 years ago

@mbrancato: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/981#issuecomment-732691626).
julianvmodesto commented 3 years ago

Yes this is the current expected behavior:

https://kubernetes.io/docs/reference/using-api/api-concepts/#dry-run-authorization

This issue is a duplicate of https://github.com/kubernetes/kubernetes/issues/95449
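Per the linked dry-run authorization docs, a server-side dry run is authorized exactly like the real write, which can be demonstrated without `diff` at all. A sketch, reusing the impersonation setup from the reproduction above (`test-modified.yaml` and the `view` binding are assumptions from that transcript):

```shell
# A server-side dry run goes through the same authorization and admission
# path as a real apply, so a view-only identity is rejected the same way:
kubectl apply -f test-modified.yaml --dry-run=server \
  --as=kubernetes-admin --as-group=view
# Fails with the same "cannot patch resource ... Forbidden" error.

# A client-side dry run never sends the patch to the server, so read
# access is enough:
kubectl apply -f test-modified.yaml --dry-run=client \
  --as=kubernetes-admin --as-group=view
```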

eddiezane commented 3 years ago

Thanks @julianvmodesto. Let's track this there.

/close

k8s-ci-robot commented 3 years ago

@eddiezane: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/981#issuecomment-736085933).
mbrancato commented 3 years ago

I'll move the discussion there, but as the title says, my issue is the patch permission requirement, specifically for kubectl diff, which I think is a layer above that other issue. I tried not to prescribe a solution, but falling back to a client-side dry run may be one.

tonybenchsci commented 3 years ago

Relates to https://github.com/kubernetes/kubernetes/issues/95449

mbrancato commented 3 years ago

Reopening this. The other discussion went nowhere.

To make it clearer: comparing the deployed state against a local manifest should not rely on write or patch permissions to the cluster, and I think that is where the misunderstanding is. At a minimum, the diff should fall back to a client-side comparison.

The same issue of kubectl diff using PATCH on the server (even with --server-side=false) breaks other things in surprising ways, such as running kubectl diff on an Ingress behind an admission controller: the PATCH triggers the admission webhook and can cause the diff to fail.

e.g.

```
% kubectl diff --server-side=false -f ingress-site.yaml
Error from server (BadRequest): admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "site.company.com" and path "/" is already defined in ingress dev/ingress-site
```
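Until diff has a read-only mode, one workaround is a plain GET plus a local `diff(1)`, which only needs read access. This is a rough sketch using the Deployment from the reproduction above: it compares raw YAML, so server-populated fields will show up as noise.

```shell
# Read-only "diff": fetch the live object and compare it to the local
# manifest. Requires only get permission on the resource.
diff -u \
  <(kubectl get deployment nginx-deployment -o yaml) \
  test-modified.yaml
# Caveat: status, managedFields, and server-applied defaults will appear
# as differences, since nothing normalizes the two documents.
```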

/reopen

k8s-ci-robot commented 3 years ago

@mbrancato: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/981#issuecomment-901939608).
k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

eddiezane commented 2 years ago

@mbrancato apologies for the friction here.

You are correct, and this appears to be a valid issue. A client-side-only diff should not be making a PATCH request; it currently does because the dry-run behavior is hard-coded. This is a bug and should be fixed.

https://github.com/kubernetes/kubernetes/blob/23ee308ed7ed7c80f7e4d0437f38400f8df6faf8/staging/src/k8s.io/kubectl/pkg/cmd/diff/diff.go#L350
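The hard-coded dry-run PATCH can also be observed from the client side by raising kubectl's request-logging verbosity. A sketch, reusing the manifest names from the reproduction above; at `-v=8` kubectl logs request methods, URLs, and bodies:

```shell
# Even with --server-side=false, the request log shows a PATCH carrying
# the dryRun=All query parameter rather than a plain read-only GET:
kubectl diff --server-side=false -f test-modified.yaml -v=8 2>&1 \
  | grep -E 'PATCH|dryRun'
```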

/triage accepted
/remove-lifecycle rotten
/priority backlog

julianvmodesto commented 2 years ago

One point of friction is that the --server-side flag name is misleading. It indicates whether the new patch is created with server-side apply or client-side apply. Ideally this would be clearer for kubectl diff with something like --server-side-apply.

I don't think this is a bug because diff is intended to only diff a live object against the new patched object.

It sounds like this request is for a client-side version of diff, with the live object retrieved from the server diffed against a patched object that's constructed locally. I think this sounds close to https://github.com/kubernetes/kubectl/issues/1147.

sklirg commented 2 years ago

I don't think this is a bug because diff is intended to only diff a live object against the new patched object.

If that is the case, I think the help text from kubectl diff --help is misleading:

```
$ kubectl diff --help
Diff configurations specified by file name or stdin between the current online configuration, and the configuration as
it would be if applied.
...
```

I interpret this as being able to diff the current online configuration with some offline configuration, and be able to see what would change if the offline configuration was applied to the online configuration. As long as I have permission to get the current online configuration, I would expect to be able to see what would change as well.
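The get/patch mismatch is easy to check per identity with `kubectl auth can-i`. A sketch, assuming the `view` clusterrolebinding from the reproduction earlier in this thread is in place:

```shell
# The view role grants the read verb that users expect diff to need...
kubectl auth can-i get deployments --as=kubernetes-admin --as-group=view
# yes

# ...but not the verb diff actually exercises via its dry-run PATCH:
kubectl auth can-i patch deployments --as=kubernetes-admin --as-group=view
# no
```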

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/981#issuecomment-1223401344).
blurpy commented 1 year ago

Can this be reopened again? Read only diff would be extremely useful, especially in pipelines for pull requests.

mbrancato commented 1 year ago

/reopen

k8s-ci-robot commented 1 year ago

@mbrancato: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/981#issuecomment-1517666444).
jgogstad commented 1 year ago

If you landed here looking for a way to incorporate kubectl diff into CI with read-only access, see https://github.com/weaveworks/kubediff. It performs the diff by reading all the objects in a Kubernetes namespace and comparing them to a set of local manifests. No PATCH permissions necessary.

edit: the project looks dead, the markdown diff output is rough (e.g. it just says "Error: Missing" if something is missing on the server), and the JSON output is limited.

paololazzari commented 5 months ago

Any updates?

imans777 commented 1 month ago

It's weird how such a useful and obvious command doesn't work as expected and its help text is misleading. Any updates would be appreciated.