kyverno / kyverno

Cloud Native Policy Management
https://kyverno.io
Apache License 2.0

[Bug] Mutating a pod label with node metadata fails with a permission error on the service account, even though the permission appears to be granted #11431

Open SohamChakraborty opened 3 days ago

SohamChakraborty commented 3 days ago

Kyverno Version

1.12.5

Kubernetes Version

1.29.x

Kubernetes Platform

Other (specify in description)

Kyverno Rule Type

Mutate

Description

Environment: kOps running Kubernetes 1.29. The Kyverno version is actually 1.12.6, but since that is not available in the dropdown, I selected 1.12.5.

An attempt to add node metadata as a label on a pod fails with a permission error for the service account, even though the requisite permission has been granted.

Steps to reproduce

We are trying to add node topology keys as labels on pods (NOT as annotations, but as labels) and came up with this ClusterPolicy:

apiVersion: kyverno.io/v2beta1
kind: ClusterPolicy
metadata:
  name: foobar
  annotations:
    pod-policies.kyverno.io/autogen-controllers: none
    policies.kyverno.io/title: Add scheduled Node's labels to a Pod
    policies.kyverno.io/category: Other
    policies.kyverno.io/subject: Pod
spec:
  rules:
    - name: client-rack-label
      match:
        any:
        - resources:
            kinds:
            - Pod/binding
      context:
      - name: node
        variable:
          jmesPath: request.object.target.name
          default: ''
      - name: clientracklabel
        apiCall:
          urlPath: "/api/v1/nodes/{{node}}"
          jmesPath: "metadata.labels.\"topology.kubernetes.io/zone\" || 'empty'"
      mutate:
        targets:
        - apiVersion: v1
          kind: Pod
          name: "{{ request.object.metadata.name }}"
          namespace: "{{ request.object.metadata.namespace }}"
        patchStrategicMerge:
          metadata:
            labels:
              client.rack: "{{ clientracklabel }}"
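
As a side note, the value that the clientracklabel apiCall resolves can be spot-checked directly against a node; <node-name> below is a placeholder for a real node name:

$ kubectl get node <node-name> -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}'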

When applied, we get an error like this:

Error from server: error when creating "foobar.yaml": admission webhook "validate-policy.kyverno.svc" denied the request: path: spec.rules[0].mutate.targets.: auth check fails, additional privileges are required for the service account 'system:serviceaccount:kyverno:kyverno-background-controller': cannot update/v1/Pod in namespace {{ request.object.metadata.namespace }}
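
This admission-time auth check appears to be roughly equivalent to asking the API server the following SubjectAccessReview (a sketch; namespace default is a placeholder, since the real namespace is only known per request):

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  # the background controller's service account username
  user: system:serviceaccount:kyverno:kyverno-background-controller
  resourceAttributes:
    group: ""
    version: v1
    resource: pods
    verb: update
    namespace: default  # placeholder namespace

Submitting it (saved as, say, sar.yaml) with kubectl create -f sar.yaml -o yaml returns .status.allowed: false here, matching the webhook's rejection.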

kubectl auth can-i reports:

$ kubectl auth can-i update pods --as system:serviceaccount:kyverno:kyverno-background-controller
no

However, the ClusterRole has this set of permissions:

$ kubectl get clusterrole kyverno:background-controller -o yaml -n kyverno
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      app.kubernetes.io/component: background-controller
      app.kubernetes.io/instance: kyverno
      app.kubernetes.io/part-of: kyverno
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-name: kyverno
    meta.helm.sh/release-namespace: kyverno
  creationTimestamp: "2024-10-04T08:37:48Z"
  labels:
    app.kubernetes.io/component: background-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: kyverno
    app.kubernetes.io/version: 3.2.7
    helm.sh/chart: kyverno-3.2.7
  name: kyverno:background-controller
  resourceVersion: "146896916"
  uid: 7ad68b63-955c-42ca-98aa-7bcd5f7bdd0f
rules:
- apiGroups:
  - ""
  resources:
  - pod
  verbs:
  - create
  - update
  - patch
  - delete
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - get
- apiGroups:
  - kyverno.io
  resources:
  - policies
  - clusterpolicies
  - policyexceptions
  - updaterequests
  - updaterequests/status
  - globalcontextentries
  - globalcontextentries/status
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
  - deletecollection
- apiGroups:
  - ""
  resources:
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  - ingressclasses
  - networkpolicies
  verbs:
  - create
  - update
  - patch
  - delete
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - pod
  - configmaps
  - secrets
  - resourcequotas
  - limitranges
  verbs:
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - pod
  - configmaps
  - secrets
  - resourcequotas
  - limitranges
  verbs:
  - create
  - update
  - patch
  - delete

So we have granted the update permission for pods. Also, the Kyverno resourceFilters setting is not filtering out Pod/binding resources.

What might be wrong here as far as permissions are concerned?

Expected behavior

The policy should be applied successfully.

Screenshots

No response

Kyverno logs

No response

Slack discussion

No response


welcome[bot] commented 3 days ago

Thanks for opening your first issue here! Be sure to follow the issue template!

realshuting commented 2 days ago

You need to use the plural resource name (pods) when configuring ClusterRoles; your rules use pod, which does not match any API resource.

  resources:
  - pods
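
For completeness, a minimal sketch of a corrected grant, reusing the aggregation labels already present on kyverno:background-controller (the ClusterRole name below is hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # hypothetical name; any non-colliding name works
  name: kyverno:rbac:update-pods
  labels:
    # these labels aggregate the rules into kyverno:background-controller
    app.kubernetes.io/component: background-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
rules:
- apiGroups:
  - ""
  resources:
  - pods   # plural, lowercase, as listed by kubectl api-resources
  verbs:
  - update
  - patch

After applying it, the earlier check should succeed:

$ kubectl auth can-i update pods --as system:serviceaccount:kyverno:kyverno-background-controller
yes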