yunify / qingcloud-csi

Kubernetes volume plugin based on the CSI specification that supports QingCloud block storage
Apache License 2.0

mount block volume failed #199

Closed: stoneshi-yunify closed this issue 2 years ago

stoneshi-yunify commented 2 years ago

What happened: installed the latest csi-qingcloud Helm chart, then created a Deployment with a block-mode PVC:

root@test-kv:~# cat busy-deploy-block.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: busybox-test
spec:
  storageClassName: csi-qingcloud
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-test
  template:
    metadata:
      labels:
        app: busybox-test
    spec:
      containers:
        - name: busybox
          image: busybox:1.29
          imagePullPolicy: IfNotPresent
          command: [ "/bin/sh", "-c", "tail -f /dev/null" ]
          volumeDevices:
          - name: volume1
            devicePath: "/dev/vde"
      volumes:
      - name: volume1
        persistentVolumeClaim:
          claimName: busybox-test
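
Note the combination above: the PVC asks for volumeMode: Block and the container consumes it through volumeDevices, so kubelet is expected to hand the pod the raw block device at /dev/vde rather than a mounted filesystem. The CSI node plugin must therefore stage and publish the device node itself instead of formatting and mounting it; the failure below follows directly from this.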

pod describe output:

root@test-kv:~# kubectl -n ttt describe pod busybox-test-786b7958b9-f6h7l
Name:           busybox-test-786b7958b9-f6h7l
Namespace:      ttt
Priority:       0
Node:           test-kv/192.168.0.2
Start Time:     Tue, 25 Jan 2022 17:25:15 +0800
Labels:         app=busybox-test
                pod-template-hash=786b7958b9
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/busybox-test-786b7958b9
Containers:
  busybox:
    Container ID:
    Image:         busybox:1.29
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      tail -f /dev/null
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6q9h5 (ro)
    Devices:
      /dev/vde from volume1
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  volume1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  busybox-test
    ReadOnly:   false
  kube-api-access-6q9h5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age              From                     Message
  ----     ------                  ----             ----                     -------
  Normal   Scheduled               24s              default-scheduler        Successfully assigned ttt/busybox-test-786b7958b9-f6h7l to test-kv
  Normal   SuccessfulAttachVolume  10s              attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785"
  Warning  FailedMapVolume         1s (x3 over 4s)  kubelet                  MapVolume.MapBlockVolume failed for volume "pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785" : blkUtil.MapDevice failed. devicePath: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/615fbcdf-2f95-4888-89fc-91abebb6ed86, globalMapPath:/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/dev, podUID: 615fbcdf-2f95-4888-89fc-91abebb6ed86, bindMount: true: failed to bind mount devicePath: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/615fbcdf-2f95-4888-89fc-91abebb6ed86 to linkPath /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/dev/615fbcdf-2f95-4888-89fc-91abebb6ed86: mount failed: exit status 32
Mounting command: mount
Mounting arguments:  -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/615fbcdf-2f95-4888-89fc-91abebb6ed86 /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/dev/615fbcdf-2f95-4888-89fc-91abebb6ed86
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/dev/615fbcdf-2f95-4888-89fc-91abebb6ed86: mount point is not a directory.
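
This event pinpoints the failure: for a block-mode volume, kubelet creates the link path under .../volumeDevices/<pvc>/dev/ as a regular file and bind-mounts the device path it gets from the plugin onto that file. Here, however, the publish path is a directory containing a mounted ext4 filesystem (see the node log below), and the kernel refuses to bind-mount a directory onto a file, hence "mount point is not a directory". A minimal, hypothetical Go reproduction of the same mount failure (the /tmp paths are made up for illustration; must run as root):

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// The publish path handed to kubelet was a directory, because the old
	// plugin mounted a filesystem there instead of exposing the raw device.
	src := "/tmp/publish-dir" // hypothetical stand-in for the publish path
	if err := os.MkdirAll(src, 0o750); err != nil {
		panic(err)
	}

	// kubelet creates the link path as a regular file and expects to
	// bind-mount a block device node onto it.
	dst := "/tmp/link-file" // hypothetical stand-in for the link path
	f, err := os.Create(dst)
	if err != nil {
		panic(err)
	}
	f.Close()

	// The kernel rejects bind-mounting a directory onto a file (ENOTDIR);
	// mount(8) reports the same condition as "mount point is not a directory".
	err = unix.Mount(src, dst, "", unix.MS_BIND, "")
	fmt.Println(err) // not a directory
}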

csi node log:

I0125 09:25:32.951180       1 rpcserver.go:116] GRPC call: /csi.v1.Node/NodeStageVolume
I0125 09:25:32.951217       1 rpcserver.go:117] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"fsType":"ext4","replica":"2","storage.kubernetes.io/csiProvisionerIdentity":"1643102577314-8081-disk.csi.qingcloud.com"},"volume_id":"vol-9mgkkrdr"}
I0125 09:25:32.955303       1 nodeserver.go:209] *************** enter NodeStageVolume at 2022-01-25 09:25:32 hash d658ac5e ***************
I0125 09:25:32.955313       1 nodeserver.go:228] Try to lock resource vol-9mgkkrdr
I0125 09:25:33.152832       1 mount_linux.go:156] Detected OS without systemd
I0125 09:25:33.152887       1 nodeserver.go:266] Find volume vol-9mgkkrdr's device path is /dev/vdc
I0125 09:25:33.152901       1 nodeserver.go:271] Mounting vol-9mgkkrdr to /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785 format ...
I0125 09:25:33.153010       1 mount_linux.go:432] Checking for issues with fsck on disk: /dev/vdc
I0125 09:25:33.274705       1 mount_linux.go:445] `fsck` error fsck from util-linux 2.29.2
fsck.ext2: Bad magic number in super-block while trying to open /dev/vdc
/dev/vdc:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

I0125 09:25:33.274777       1 mount_linux.go:451] Attempting to mount disk:  /dev/vdc /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785
I0125 09:25:33.274807       1 mount_linux.go:138] Mounting cmd (mount) with arguments ([-o defaults /dev/vdc /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785])
E0125 09:25:33.292942       1 mount_linux.go:143] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -o defaults /dev/vdc /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785
Output: mount: wrong fs type, bad option, bad superblock on /dev/vdc,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

I0125 09:25:33.293127       1 mount_linux.go:506] Attempting to determine if disk "/dev/vdc" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/vdc])
I0125 09:25:33.376970       1 mount_linux.go:509] Output: "", err: exit status 2
I0125 09:25:33.377052       1 mount_linux.go:480] Disk "/dev/vdc" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/vdc]
I0125 09:25:35.580029       1 mount_linux.go:484] Disk successfully formatted (mkfs): ext4 - /dev/vdc /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785
I0125 09:25:35.580077       1 mount_linux.go:138] Mounting cmd (mount) with arguments ([-t ext4 -o defaults /dev/vdc /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785])
I0125 09:25:35.591995       1 nodeserver.go:275] Mount vol-9mgkkrdr to /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785 succeed
I0125 09:25:35.592045       1 nodeserver.go:276] =============== exit NodeStageVolume at 2022-01-25 09:25:32 hash d658ac5e ===============
I0125 09:25:35.592063       1 rpcserver.go:122] GRPC response: {}
I0125 09:25:35.683554       1 rpcserver.go:116] GRPC call: /csi.v1.Node/NodePublishVolume
I0125 09:25:35.683581       1 rpcserver.go:117] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785","target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/615fbcdf-2f95-4888-89fc-91abebb6ed86","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"fsType":"ext4","replica":"2","storage.kubernetes.io/csiProvisionerIdentity":"1643102577314-8081-disk.csi.qingcloud.com"},"volume_id":"vol-9mgkkrdr"}
I0125 09:25:35.686906       1 nodeserver.go:69] *************** enter NodePublishVolume at 2022-01-25 09:25:35 hash 6ada69a3 ***************
I0125 09:25:35.686915       1 nodeserver.go:95] Try to lock resource vol-9mgkkrdr
I0125 09:25:35.839865       1 nodeserver.go:148] Bind mount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785 at /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/615fbcdf-2f95-4888-89fc-91abebb6ed86, fsType , options [bind] ...
I0125 09:25:35.839904       1 mount_linux.go:138] Mounting cmd (mount) with arguments ([-o bind /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785 /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/615fbcdf-2f95-4888-89fc-91abebb6ed86])
I0125 09:25:35.842011       1 mount_linux.go:138] Mounting cmd (mount) with arguments ([-o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785 /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/615fbcdf-2f95-4888-89fc-91abebb6ed86])
I0125 09:25:35.844432       1 nodeserver.go:152] Mount bind /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785 at /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-8c1cdf03-1684-446a-9f9b-ab6a5bbbe785/615fbcdf-2f95-4888-89fc-91abebb6ed86 succeed
I0125 09:25:35.844456       1 nodeserver.go:153] =============== exit NodePublishVolume at 2022-01-25 09:25:35 hash 6ada69a3 ===============
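
The node log confirms the root cause: the NodeStageVolume request carries "AccessType":{"Block":{}}, yet the plugin runs fsck, formats /dev/vdc as ext4, and mounts it at the staging path, i.e. it handles every volume as a filesystem volume. For comparison, a minimal sketch (hypothetical names, not the actual qingcloud-csi code) of the branch a block-aware NodeStageVolume needs:

package node

import (
	"errors"

	"github.com/container-storage-interface/spec/lib/go/csi"
	mount "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// stageVolume is a hypothetical sketch; devicePath is assumed to be
// resolved from req.VolumeId elsewhere (the log above shows /dev/vdc
// for vol-9mgkkrdr).
func stageVolume(req *csi.NodeStageVolumeRequest, devicePath string) error {
	vcap := req.GetVolumeCapability()
	if vcap == nil {
		return errors.New("volume capability missing in request")
	}

	// Block access type ("AccessType":{"Block":{}} in the log): the pod
	// consumes the raw device through volumeDevices, so nothing may be
	// formatted or mounted here; NodePublishVolume later bind-mounts the
	// device node itself onto a file target path.
	if vcap.GetBlock() != nil {
		return nil
	}

	// Mount access type: format on first use, then mount at the staging
	// path. The old plugin took this path unconditionally, which is why
	// it ran mkfs.ext4 on a volumeMode: Block volume.
	mounter := &mount.SafeFormatAndMount{
		Interface: mount.New(""),
		Exec:      utilexec.New(),
	}
	fsType := req.GetVolumeContext()["fsType"] // "ext4" in the request above
	return mounter.FormatAndMount(devicePath, req.GetStagingTargetPath(), fsType, nil)
}
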
dkeven commented 2 years ago

This is caused by an older version of qingcloud-csi that does not support block-mode volumes. An image of a version with support has now been pushed to Docker Hub: https://hub.docker.com/layers/csiplugin/csi-qingcloud/1.3.1/images/sha256-041ef45452658439d98d80dab8969f2cf008343251e91e52eba71c126e733396?context=repo
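
In practice, upgrading the release so the node plugin runs the csiplugin/csi-qingcloud:1.3.1 image (for example via helm upgrade, reusing the existing values) and recreating the pod should resolve the FailedMapVolume events; the exact command depends on how the chart was installed.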