kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

Unable to attach or mount volumes #3793

Open portellaa opened 2 days ago

portellaa commented 2 days ago

Hi πŸ‘‹

I'm getting the following error when trying to launch a pod with a PVC backed by local storage. When I describe the pod, I see:

Unable to attach or mount volumes: unmounted volumes=[some-volume], unattached volumes=[], failed to process volumes=[some-volume]: error processing PVC <namespace>/<pvc-name>: failed to fetch PV <pv-name> from API server: persistentvolumes "<pv-name>" is forbidden: User "system:node:test-solution-worker" cannot get resource "persistentvolumes" in API group "" at the cluster scope: no relationship found between node 'test-solution-worker' and this object
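
To give a bit more context, this is roughly how I've been comparing where the pod got scheduled with what the API server knows about the PV (the names in angle brackets are placeholders for the real ones in the error above):

# node the pod actually landed on
kubectl -n <namespace> get pod <pod-name> -o wide

# which PV the claim is bound to
kubectl -n <namespace> get pvc <pvc-name>

# node affinity recorded on the PV, if any
kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}{"\n"}'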

My cluster is created using the following configuration:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: test-solution
nodes:
- role: control-plane
  image: kindest/node:v1.31.2@sha256:18fbefc20a7113353c7b75b5c869d7145a6abd6269154825872dc59c1329912e
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraMounts:
  - hostPath: /cluster_data
    containerPath: /mnt
- role: worker
  image: kindest/node:v1.31.2@sha256:18fbefc20a7113353c7b75b5c869d7145a6abd6269154825872dc59c1329912e
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraMounts:
  - hostPath: /cluster_data
    containerPath: /mnt
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
  image: kindest/node:v1.31.2@sha256:18fbefc20a7113353c7b75b5c869d7145a6abd6269154825872dc59c1329912e
  labels:
    type: cpu-compute
  kubeadmConfigPatches:
  - |
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: JoinConfiguration
    nodeRegistration:
      taints:
      - key: cpu
        value: "true"
        effect: NoSchedule
      - key: preset
        value: micro
        effect: NoSchedule
  extraMounts:
  - hostPath: /cluster_data
    containerPath: /mnt
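
For completeness, the cluster is created from that file in the usual way (the file name kind-config.yaml is just what I happen to use):

kind create cluster --config kind-config.yaml
kind get nodes --name test-solution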

The configuration for the local storage provisioner is:

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
            "sharedFileSystemPath": "/mnt"
    }
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: docker.io/kindest/local-path-helper:v20230510-486859a6
        imagePullPolicy: IfNotPresent
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 7777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
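
In case a reproducer helps, this is the kind of minimal PVC + pod I exercise it with (names and size are placeholders; standard is kind's default local-path storage class):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-volume
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: some-volume
      mountPath: /data
  volumes:
  - name: some-volume
    persistentVolumeClaim:
      claimName: some-volume
EOF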

This works sometimes, and even when the error does occur, the pod sometimes recovers on its own.

Does anyone have an idea why this is happening and what I have done wrong?

This is part of a product that runs fine on the managed Kubernetes services from Amazon, Azure and Google, so something must be wrong in my kind configuration. The weirdest part is that it doesn't happen all the time.

I have checked the system:node ClusterRole and it has permissions on persistentvolumes, persistentvolumeclaims and volumeattachments. I'm out of ideas 😞
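
As far as I can tell, the "no relationship found between node ... and this object" part comes from the Node authorizer rather than from RBAC (kind's kubeadm-based API server runs with --authorization-mode=Node,RBAC), so the ClusterRole alone wouldn't explain it: the kubelet is only allowed to read PVs that are referenced by pods already bound to its node. I still don't see why that relationship would be missing here. The authorization mode can be confirmed on the control-plane container (named <cluster>-control-plane by kind):

docker exec test-solution-control-plane \
  grep authorization-mode /etc/kubernetes/manifests/kube-apiserver.yaml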

If anyone has an idea, please shout πŸ™

BenTheElder commented 1 day ago

I'm not sure, but you have a lot going on here. Can you try to find the minimal configuration where this occurs?

portellaa commented 9 hours ago

I suspect the cause is a manipulation I'm doing with Kyverno that isn't correct.

I will update this as soon as I'm sure of that.

Thanks πŸ™