kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

Kube API Server will not start when specifying an audit policy config #15489

Closed danny-does-stuff closed 6 months ago

danny-does-stuff commented 1 year ago

/kind bug

1. What kops version are you running? The command kops version will display this information.

1.26.3

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

1.26.5

3. What cloud provider are you using? AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

  1. kops edit cluster
  2. Add an audit policy to spec.fileAssets and the corresponding audit options to kubeAPIServer (see the sketch after these steps)
  3. kops update cluster --yes
  4. kops rolling-update --yes
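
The relevant part of the edit looks roughly like the sketch below (policy content trimmed to a single rule for brevity; the full manifest is in item 7):

  fileAssets:
  - content: |
      apiVersion: audit.k8s.io/v1
      kind: Policy
      rules:
      - level: Metadata
    name: audit-policy-config
    path: /etc/kubernetes/audit/policy-config.yaml
    roles:
    - ControlPlane
  kubeAPIServer:
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditPolicyFile: /etc/kubernetes/audit/policy-config.yaml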

5. What happened after the commands executed? The cluster fails to update because the kube-apiserver errors with:

E0609 19:48:46.693242      10 run.go:74] "command failed" err="loading audit policy file: failed to read file path \"/etc/kubernetes/audit/policy-config.yaml\": open /etc/kubernetes/audit/policy-config.yaml: no such file or directory"

6. What did you expect to happen? The cluster works fine with the given audit policy enabled

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

Using cluster from kubectl context: services.infinid.io

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2018-09-21T16:35:21Z"
  generation: 34
  name: 
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["sts:AssumeRole"],
          "Resource": ["*"]
        }
      ]
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  clusterAutoscaler:
    enabled: true
  configBase: 
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-2a
      name: a
    - instanceGroup: master-us-east-2b
      name: b
    - instanceGroup: master-us-east-2c
      name: c
    name: main
  - etcdMembers:
    - instanceGroup: master-us-east-2a
      name: a
    - instanceGroup: master-us-east-2b
      name: b
    - instanceGroup: master-us-east-2c
      name: c
    name: events
  fileAssets:
  - content: |
      apiVersion: audit.k8s.io/v1 # This is required.
      kind: Policy
      # Don't generate audit events for all requests in RequestReceived stage.
      omitStages:
        - "RequestReceived"
      rules:
        # Log pod changes at RequestResponse level
        - level: RequestResponse
          resources:
          - group: ""
            # Resource "pods" doesn't match requests to any subresource of pods,
            # which is consistent with the RBAC policy.
            resources: ["pods"]
        # Log "pods/log", "pods/status" at Metadata level
        - level: Metadata
          resources:
          - group: ""
            resources: ["pods/log", "pods/status"]

        # Don't log requests to a configmap called "controller-leader"
        - level: None
          resources:
          - group: ""
            resources: ["configmaps"]
            resourceNames: ["controller-leader"]

        # Don't log watch requests by the "system:kube-proxy" on endpoints or services
        - level: None
          users: ["system:kube-proxy"]
          verbs: ["watch"]
          resources:
          - group: "" # core API group
            resources: ["endpoints", "services"]

        # Don't log authenticated requests to certain non-resource URL paths.
        - level: None
          userGroups: ["system:authenticated"]
          nonResourceURLs:
          - "/api*" # Wildcard matching.
          - "/version"

        # Log the request body of configmap changes in kube-system.
        - level: Request
          resources:
          - group: "" # core API group
            resources: ["configmaps"]
          # This rule only applies to resources in the "kube-system" namespace.
          # The empty string "" can be used to select non-namespaced resources.
          namespaces: ["kube-system"]

        # Log configmap and secret changes in all other namespaces at the Metadata level.
        - level: Metadata
          resources:
          - group: "" # core API group
            resources: ["secrets", "configmaps"]

        # Log all other resources in core and extensions at the Request level.
        - level: Request
          resources:
          - group: "" # core API group
          - group: "extensions" # Version of group should NOT be included.

        # A catch-all rule to log all other requests at the Metadata level.
        - level: Metadata
          # Long-running requests like watches that fall under this rule will not
          # generate an audit event in RequestReceived.
          omitStages:
            - "RequestReceived"
    name: audit-policy-config
    path: /etc/kubernetes/audit/policy-config.yaml
    roles:
    - ControlPlane
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    auditLogMaxAge: 10
    auditLogMaxBackups: 1
    auditLogMaxSize: 100
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditPolicyFile: /etc/kubernetes/audit/policy-config.yaml
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.26.5
  masterPublicName: 
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 
    name: us-east-2a
    type: Public
    zone: us-east-2a
  - cidr:
    name: us-east-2b
    type: Public
    zone: us-east-2b
  - cidr: 
    name: us-east-2c
    type: Public
    zone: us-east-2c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-05-12T20:06:51Z"
  generation: 6
  labels:
    kops.k8s.io/cluster: services.infinid.io
  name: beefy-nodes
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230325
  machineType: m5.large
  maxSize: 8
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: beefy-nodes
  role: Node
  subnets:
  - us-east-2a
  - us-east-2b
  taints:
  - dedicated=gameservers:NoSchedule

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2019-01-03T22:59:23Z"
  generation: 4
  labels:
    kops.k8s.io/cluster: services.infinid.io
  name: db
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230325
  machineType: m4.large
  manager: CloudGroup
  maxSize: 2
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: db
  role: Node
  rootVolumeSize: 20
  subnets:
  - us-east-2c
  taints:
  - dedicated=db:NoSchedule

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2019-12-18T18:05:16Z"
  generation: 6
  labels:
    kops.k8s.io/cluster: services.infinid.io
  name: internal
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230325
  machineType: t2.micro
  maxSize: 2
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: internal
  role: Node
  rootVolumeSize: 10
  subnets:
  - us-east-2c

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2018-09-21T16:35:22Z"
  generation: 7
  labels:
    kops.k8s.io/cluster: services.infinid.io
  name: master-us-east-2a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230325
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-2a
  role: Master
  rootVolumeSize: 20
  subnets:
  - us-east-2a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-06-06T20:17:56Z"
  labels:
    kops.k8s.io/cluster: services.infinid.io
  name: master-us-east-2b
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230502
  kubelet:
    anonymousAuth: false
    nodeLabels:
      kops.k8s.io/kops-controller-pki: ""
      node-role.kubernetes.io/control-plane: ""
      node.kubernetes.io/exclude-from-external-load-balancers: ""
    taints:
    - node-role.kubernetes.io/control-plane=:NoSchedule
  machineType: t3.medium
  manager: CloudGroup
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-2b
  role: Master
  subnets:
  - us-east-2b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-06-06T20:18:26Z"
  labels:
    kops.k8s.io/cluster: services.infinid.io
  name: master-us-east-2c
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230502
  kubelet:
    anonymousAuth: false
    nodeLabels:
      kops.k8s.io/kops-controller-pki: ""
      node-role.kubernetes.io/control-plane: ""
      node.kubernetes.io/exclude-from-external-load-balancers: ""
    taints:
    - node-role.kubernetes.io/control-plane=:NoSchedule
  machineType: t3.medium
  manager: CloudGroup
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-2c
  role: Master
  subnets:
  - us-east-2c

---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-05-13T01:31:09Z"
  generation: 3
  labels:
    kops.k8s.io/cluster: services.infinid.io
  name: temp-backup-nodes
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230325
  machineType: t3.medium
  maxSize: 4
  minSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: temp-backup-nodes
  role: Node
  rootVolumeSize: 20
  subnets:
  - us-east-2a
  - us-east-2b

8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here. Not Applicable

9. Anything else we need to know?

I can confirm that the file was added to the node, but I am not sure why the API server is not picking it up.

hakman commented 1 year ago

@danny-does-stuff Could you try adding mode: "0444" to the file asset? https://kops.sigs.k8s.io/cluster_spec/#mode
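
For reference, the fileAssets entry with mode set would look something like this (a sketch reusing the name and path from the report above):

  fileAssets:
  - content: |
      # audit policy as above
    mode: "0444"
    name: audit-policy-config
    path: /etc/kubernetes/audit/policy-config.yaml
    roles:
    - ControlPlane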

Branrir commented 1 year ago

Hi, I can confirm the behaviour as well. I tried setting mode as advised, too. The file is created, but it is not added to the volumeMounts in the kube-apiserver manifest. Cluster spec:

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2023-07-20T13:16:44Z"
  generation: 1
  name: dev.1690-audit.k8s.local
spec:
  addons:
    - ...
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  certManager:
    enabled: false
  channel: stable
  cloudConfig:
    openstack:
      blockStorage:
        bs-version: v2
        createStorageClass: false
        ignore-volume-az: true
        override-volume-az: nova
      loadbalancer:
        floatingNetwork: public
        floatingNetworkID: 91371e55-9cc1-4ed0-bbdc-a7476669b4bd
        manageSecurityGroups: true
        method: ROUND_ROBIN
        provider: haproxy
        useOctavia: false
      monitor:
        delay: 1m
        maxRetries: 3
        timeout: 30s
      router:
        externalNetwork: public
  cloudControllerManager:
    clusterName: dev.1690-audit.k8s.local
    image: k8scloudprovider/openstack-cloud-controller-manager:v1.25.6
  cloudProvider: openstack
  configBase: ...
  containerRuntime: containerd
  containerd:
    registryMirrors:
      docker.io:
      - ...
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-zone-01
      name: etcd-zone-01
      volumeSize: 2
      volumeType: fast-1000
    - instanceGroup: master-zone-02
      name: etcd-zone-02
      volumeSize: 2
      volumeType: fast-1000
    - instanceGroup: master-zone-03
      name: etcd-zone-03
      volumeSize: 2
      volumeType: fast-1000
    manager:
      env:
      - name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
        value: 7d
      - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
        value: 14d
    memoryRequest: 100Mi
    name: main
    provider: Manager
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-zone-01
      name: etcd-zone-01
      volumeSize: 2
      volumeType: fast-1000
    - instanceGroup: master-zone-02
      name: etcd-zone-02
      volumeSize: 2
      volumeType: fast-1000
    - instanceGroup: master-zone-03
      name: etcd-zone-03
      volumeSize: 2
      volumeType: fast-1000
    manager:
      env:
      - name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
        value: 7d
      - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
        value: 14d
    memoryRequest: 100Mi
    name: events
    provider: Manager
  fileAssets:
  - content: |
      apiVersion: audit.k8s.io/v1
      kind: Policy
      rules:
      - level: Metadata
    mode: "0444"
    name: audit-policy-config
    path: /etc/kubernetes/audit/policy-config.yaml
    roles:
    - ControlPlane
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    allowPrivileged: true
    auditLogMaxAge: 10
    auditLogMaxBackups: 1
    auditLogMaxSize: 100
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditPolicyFile: /etc/kubernetes/audit/policy-config.yaml
    oidcClientID: kubernetes
    oidcGroupsClaim: groups
    oidcIssuerURL: ...
    oidcUsernameClaim: email
  kubeProxy:
    metricsBindAddress: 0.0.0.0
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.25.10
  masterPublicName: api.dev.1690-audit.k8s.local
  metricsServer:
    enabled: true
    insecure: true
  networkCIDR: 10.0.0.0/20
  networking:
    cilium: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  sshKeyName: dev.1690-audit.k8s.local
  subnets:
  - cidr: ...
    name: zone01
    type: Private
    zone: local_zone_01
  - cidr: ...
    name: zone02
    type: Private
    zone: local_zone_02
  - cidr: ...
    name: zone03
    type: Private
    zone: local_zone_03
  - cidr: ...
    name: utility-zone01
    type: Utility
    zone: local_zone_01
  topology:
    bastion:
      bastionPublicName: bastion.dev.1690-audit.k8s.local
    dns:
      type: Private
    masters: private
    nodes: private

Here is the kube-apiserver manifest that gets created in /etc/kubernetes/manifests/:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    dns.alpha.kubernetes.io/internal: api.internal.dev.1690-audit.k8s.local
    kubectl.kubernetes.io/default-container: kube-apiserver
  creationTimestamp: null
  labels:
    k8s-app: kube-apiserver
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - args:
    - --log-file=/var/log/kube-apiserver.log
    - --also-stdout
    - /usr/local/bin/kube-apiserver
    - --allow-privileged=true
    - --anonymous-auth=false
    - --api-audiences=kubernetes.svc.default
    - --apiserver-count=3
    - --audit-log-maxage=10
    - --audit-log-maxbackup=1
    - --audit-log-maxsize=100
    - --audit-log-path=/var/log/kube-apiserver-audit.log
    - --audit-policy-file=/etc/kubernetes/audit/policy-config.yaml
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --client-ca-file=/srv/kubernetes/ca.crt
    - --cloud-config=/etc/kubernetes/in-tree-cloud.config
    - --cloud-provider=external
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota
    - --enable-aggregator-routing=true
    - --etcd-cafile=/srv/kubernetes/kube-apiserver/etcd-ca.crt
    - --etcd-certfile=/srv/kubernetes/kube-apiserver/etcd-client.crt
    - --etcd-keyfile=/srv/kubernetes/kube-apiserver/etcd-client.key
    - --etcd-servers-overrides=/events#https://127.0.0.1:4002
    - --etcd-servers=https://127.0.0.1:4001
    - --kubelet-client-certificate=/srv/kubernetes/kube-apiserver/kubelet-api.crt
    - --kubelet-client-key=/srv/kubernetes/kube-apiserver/kubelet-api.key
    - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
    - --oidc-client-id=kubernetes
    - --oidc-groups-claim=groups
    - --oidc-issuer-url=....
    - --oidc-username-claim=email
    - --proxy-client-cert-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.crt
    - --proxy-client-key-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.key
    - --requestheader-allowed-names=aggregator
    - --requestheader-client-ca-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=443
    - --service-account-issuer=https://api.internal.dev.1690-audit.k8s.local
    - --service-account-jwks-uri=https://api.internal.dev.1690-audit.k8s.local/openid/v1/jwks
    - --service-account-key-file=/srv/kubernetes/kube-apiserver/service-account.pub
    - --service-account-signing-key-file=/srv/kubernetes/kube-apiserver/service-account.key
    - --service-cluster-ip-range=...
    - --storage-backend=etcd3
    - --tls-cert-file=/srv/kubernetes/kube-apiserver/server.crt
    - --tls-private-key-file=/srv/kubernetes/kube-apiserver/server.key
    - --v=2
    command:
    - /go-runner
    image: registry.k8s.io/kube-apiserver:v1.25.10@sha256:ccce3b0e4b288635f642c73a9a847ed67858e86c5afe37fc775887821aa3cd9e
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 3990
      initialDelaySeconds: 45
      timeoutSeconds: 15
    name: kube-apiserver
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    resources:
      requests:
        cpu: 150m
    volumeMounts:
    - mountPath: /var/log/kube-apiserver.log
      name: logfile
    - mountPath: /etc/ssl
      name: etcssl
      readOnly: true
    - mountPath: /etc/pki/tls
      name: etcpkitls
      readOnly: true
    - mountPath: /etc/pki/ca-trust
      name: etcpkica-trust
      readOnly: true
    - mountPath: /usr/share/ssl
      name: usrsharessl
      readOnly: true
    - mountPath: /usr/ssl
      name: usrssl
      readOnly: true
    - mountPath: /usr/lib/ssl
      name: usrlibssl
      readOnly: true
    - mountPath: /usr/local/openssl
      name: usrlocalopenssl
      readOnly: true
    - mountPath: /var/ssl
      name: varssl
      readOnly: true
    - mountPath: /etc/openssl
      name: etcopenssl
      readOnly: true
    - mountPath: /etc/kubernetes/in-tree-cloud.config
      name: cloudconfig
      readOnly: true
    - mountPath: /srv/kubernetes/ca.crt
      name: kubernetesca
      readOnly: true
    - mountPath: /srv/kubernetes/kube-apiserver
      name: srvkapi
      readOnly: true
    - mountPath: /srv/sshproxy
      name: srvsshproxy
      readOnly: true
    - mountPath: /var/log
      name: auditlogpathdir
  - args:
    - --ca-cert=/secrets/ca.crt
    - --client-cert=/secrets/client.crt
    - --client-key=/secrets/client.key
    image: registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.4@sha256:db9f17c1c8b2dfc081e62138f8dcba0a882264f4a95da13e0226af53a45e50dc
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /.kube-apiserver-healthcheck/healthz
        port: 3990
      initialDelaySeconds: 5
      timeoutSeconds: 5
    name: healthcheck
    resources: {}
    securityContext:
      runAsNonRoot: true
      runAsUser: 10012
    volumeMounts:
    - mountPath: /secrets
      name: healthcheck-secrets
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
  volumes:
  - hostPath:
      path: /var/log/kube-apiserver.log
    name: logfile
  - hostPath:
      path: /etc/ssl
    name: etcssl
  - hostPath:
      path: /etc/pki/tls
    name: etcpkitls
  - hostPath:
      path: /etc/pki/ca-trust
    name: etcpkica-trust
  - hostPath:
      path: /usr/share/ssl
    name: usrsharessl
  - hostPath:
      path: /usr/ssl
    name: usrssl
  - hostPath:
      path: /usr/lib/ssl
    name: usrlibssl
  - hostPath:
      path: /usr/local/openssl
    name: usrlocalopenssl
  - hostPath:
      path: /var/ssl
    name: varssl
  - hostPath:
      path: /etc/openssl
    name: etcopenssl
  - hostPath:
      path: /etc/kubernetes/in-tree-cloud.config
    name: cloudconfig
  - hostPath:
      path: /srv/kubernetes/ca.crt
    name: kubernetesca
  - hostPath:
      path: /srv/kubernetes/kube-apiserver
    name: srvkapi
  - hostPath:
      path: /srv/sshproxy
    name: srvsshproxy
  - hostPath:
      path: /var/log
    name: auditlogpathdir
  - hostPath:
      path: /etc/kubernetes/kube-apiserver-healthcheck/secrets
      type: Directory
    name: healthcheck-secrets
status: {}

A possible workaround is to move the policy-config.yaml file to /srv/kubernetes/kube-apiserver, since that directory is already mounted in the manifest. The following change in the cluster spec helped:

...
  fileAssets:
  - content: |
      apiVersion: audit.k8s.io/v1
      kind: Policy
      rules:
      - level: Metadata
    mode: "0444"
    name: audit-policy-config
    path: /srv/kubernetes/kube-apiserver/policy-config.yaml
    roles:
    - ControlPlane
  kubeAPIServer:
    allowPrivileged: true
    auditLogMaxAge: 10
    auditLogMaxBackups: 1
    auditLogMaxSize: 100
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditPolicyFile: /srv/kubernetes/kube-apiserver/policy-config.yaml
    ...

ndallavalentina commented 1 year ago

Is there any update on this, please? I've got the same issue after I upgraded from kops 1.25.2 to 1.26.5 (k8s 1.26.7): the control plane is not joining the cluster. I checked, and the kube-apiserver container exited because it cannot find the audit policy specified via fileAssets (the file is not created by kops). To fix it, I changed the permissions with mode: "0544"; I tried 0644 but that did not work.

  fileAssets:
  - name: audit-policy
    path: /srv/kubernetes/kube-apiserver/audit.yaml
    mode: "0544"
    roles: [ControlPlane]
    content: |
      apiVersion: audit.k8s.io/v1
      kind: Policy

Not sure why it works when execute permission is assigned to audit.yaml:

-r-xr--r--. 1 root root  317 Aug 16 06:55 audit.yaml
-rw-------. 1 root root  228 Aug 16 06:55 encryptionconfig.yaml
-rw-r--r--. 1 root root 1054 Aug 16 06:55 etcd-ca.crt
-rw-r--r--. 1 root root 1082 Aug 16 06:55 etcd-client.crt
-rw-------. 1 root root 1675 Aug 16 06:55 etcd-client.key
djablonski-moia commented 1 year ago

I ran into the same issue, but luckily also found #15488, which solved the problem for me :smile:

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 6 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/kops/issues/15489#issuecomment-2027726797):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.