mbrancato opened this issue 3 years ago
kubectl supports +1/-1 versions of the API server, so your v1.19.3 client is pretty far outside our version skew policy.
You could try using an older kubectl version or upgrading your cluster to a supported version.
Please reopen if you run into this with a supported version.
/close
@eddiezane: Closing this issue.
Hi @eddiezane
Here is a minimal example:
This uses https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/controllers/nginx-deployment.yaml as test.yaml.
$ kubectl apply -f test.yaml
deployment.apps/nginx-deployment created
$ kubectl get deployment
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default nginx-deployment 3/3 3 3 14s
$ kubectl create clusterrolebinding view --group=view --clusterrole=view
clusterrolebinding.rbac.authorization.k8s.io/view created
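For context (not part of the original report): the built-in view ClusterRole grants only read verbs like get, list, and watch. A rough sketch of the extra RBAC that kubectl diff currently ends up requiring, with hypothetical object names, might look like this:

```yaml
# Hypothetical Role granting the "patch" verb that kubectl diff's
# server-side dry run currently requires. Names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-differ
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch"]   # "patch" is the surprising extra verb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-differ
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-differ
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: view
```

Granting patch just to run a diff is exactly the objection raised in this issue; the sketch only shows what the current behavior effectively demands.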
$ kubectl get deployment --as=kubernetes-admin --as-group=view
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 20m
$ kubectl apply -f test-modified.yaml --as=kubernetes-admin --as-group=view
Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx-deployment\",\"namespace\":\"default\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"app\":\"nginx\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"nginx\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:1.14.2\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":88}]}]}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"nginx"}],"containers":[{"$setElementOrder/ports":[{"containerPort":88}],"name":"nginx","ports":[{"containerPort":88},{"$patch":"delete","containerPort":80}]}]}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-deployment", Namespace: "default"
for: "test.yaml": deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
$ kubectl diff -f test-modified.yaml --as=kubernetes-admin --as-group=view
Error from server (Forbidden): deployments.apps "nginx-deployment" is forbidden: User "kubernetes-admin" cannot patch resource "deployments" in API group "apps" in the namespace "default"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:08:32Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
/reopen
@mbrancato: Reopened this issue.
Yes this is the current expected behavior:
https://kubernetes.io/docs/reference/using-api/api-concepts/#dry-run-authorization
This issue is a duplicate of https://github.com/kubernetes/kubernetes/issues/95449
Thanks @julianvmodesto. Let's track this there.
/close
@eddiezane: Closing this issue.
I'll move discussion there, but as the title says, my issue is the patch permission requirement, specifically for kubectl diff, which I think is an abstraction above that other issue. I tried not to prescribe a solution, but I thought falling back to a client-side dry run may be one.
Reopening this. The other discussion went nowhere.
To make it clearer: comparing the deployed state against a local manifest should not rely on write or patch permissions to a cluster, and I think that is where the misunderstanding is. At a minimum, kubectl diff should fall back to a client-side comparison.
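The suggested fallback could be approximated today with a read-only GET plus a plain diff. A rough sketch follows, with local files standing in for the output of kubectl get -o yaml (filenames and contents are hypothetical; a real version would also need to strip server-populated fields like status and resourceVersion):

```shell
# Simulate the live object as returned by a read-only GET.
# In practice: kubectl get deployment nginx-deployment -o yaml > live.yaml
cat > live.yaml <<'EOF'
spec:
  containers:
  - image: nginx:1.14.2
    ports:
    - containerPort: 80
EOF

# The locally modified manifest.
cat > test-modified.yaml <<'EOF'
spec:
  containers:
  - image: nginx:1.14.2
    ports:
    - containerPort: 88
EOF

# A plain unified diff needs no write permissions at all.
# diff exits non-zero when the files differ, hence the || true.
diff -u live.yaml test-modified.yaml || true
```

This misses what server-side dry run provides (defaulting, admission mutation), which is presumably why diff was wired to dry run in the first place, but it would work with read-only credentials.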
The same issue of kubectl diff using PATCH on the server (even with --server-side=false) applies to other things that are astonishingly broken, like doing kubectl diff on an Ingress with an admission controller. The PATCH triggers the admission controller and can cause the diff to fail, e.g.:
% kubectl diff --server-side=false -f ingress-site.yaml
Error from server (BadRequest): admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "site.company.com" and path "/" is already defined in ingress dev/ingress-site
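For anyone debugging this: a server-side dry run still invokes validating admission webhooks (only persistence and side effects are skipped), so a webhook like ingress-nginx's can deny the dry-run PATCH that kubectl diff sends. A hedged sketch of the webhook registration fields involved, with illustrative values, looks roughly like:

```yaml
# Sketch of a ValidatingWebhookConfiguration similar to ingress-nginx's
# (values are illustrative, not copied from the project).
# sideEffects: None tells the API server the webhook is safe to call for
# dry-run requests -- so it WILL be called, and its denial fails the diff.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-nginx-admission
webhooks:
- name: validate.nginx.ingress.kubernetes.io
  sideEffects: None          # dry-run requests are still sent to the webhook
  admissionReviewVersions: ["v1"]
  rules:
  - apiGroups: ["networking.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["ingresses"]
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
```

Webhooks that declare sideEffects: Some or Unknown would instead cause the dry run to be rejected outright; either way, PATCH-based diff is at the mercy of admission logic.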
/reopen
@mbrancato: Reopened this issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
@mbrancato apologies for the friction here.
You are correct and this appears to be a valid issue. A client-side-only diff should not be making a PATCH request; it currently does because diff is hard-coded to use a server-side dry run. This is a bug and should be fixed.
/triage accepted
/remove-lifecycle rotten
/priority backlog
One point of friction is that the --server-side flag name is misleading. The --server-side flag indicates whether the new patch is created with server-side apply or client-side apply. Ideally this would be clearer for kubectl diff, with something like --server-side-apply.
I don't think this is a bug because diff is intended to only diff a live object against the new patched object.
It sounds like this request is for a client-side version of diff, with the live object retrieved from the server diffed against a patched object that's constructed locally. I think this sounds close to https://github.com/kubernetes/kubectl/issues/1147.
I don't think this is a bug because diff is intended to only diff a live object against the new patched object.
If that is the case, I think the help text from kubectl diff --help is misleading:
$ kubectl diff --help
Diff configurations specified by file name or stdin between the current online configuration, and the configuration as
it would be if applied.
...
I interpret this as being able to diff the current online configuration with some offline configuration, and be able to see what would change if the offline configuration was applied to the online configuration. As long as I have permission to get the current online configuration, I would expect to be able to see what would change as well.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
Can this be reopened again? A read-only diff would be extremely useful, especially in pipelines for pull requests.
/reopen
@mbrancato: Reopened this issue.
If you landed here looking for a way to incorporate kubectl diff into CI with read-only access, see https://github.com/weaveworks/kubediff. It performs the diff by reading all the objects in a Kubernetes namespace and comparing them to a set of local manifests. No PATCH permissions necessary.

edit: the project looks dead, the markdown diff output is kind of crap (e.g. it will say "Error: Missing" if something is missing on the server), and the JSON output is limited.
Any updates?
It's weird how such a useful and obvious command doesn't work as expected and its help text is misleading. Any updates would be appreciated.
What happened:

Usually, when we think about using diff to compare two things, write access is not required. In this case, it seems kubectl diff requires write access to compare the current and future state. This might be an upstream problem if this is due to server-side apply.

When doing kubectl diff on a Deployment, it seems to require PATCH / PUT permission under /apis/apps/v1/namespaces/{namespace}/deployments/* or the older /apis/extensions/v1beta1/namespaces/{namespace}/deployments/*.

What you expected to happen:

Diff output to be shown.

How to reproduce it (as minimally and precisely as possible):

Attempt to diff a Deployment resource from a user lacking write access to the deployment. The permission in the example above is mapped to GKE specifically.

Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g: cat /etc/os-release): n/a