oVirt / ovirt-openshift-extensions

Implementation of flexvolume driver and provisioner for oVirt
Apache License 2.0

[fsType:xfs] kubernetes (FailedMount) - MountVolume.MountDevice failed - invalid character 'e' looking for beginning of value #148

Closed · gpastuszko closed this issue 2 years ago

gpastuszko commented 4 years ago

After following the Kubernetes Guide, the disk is created by oVirt and attached to the node (VM) where the pod that mounts the claim is deployed. However, the volume fails to mount when fsType is set to xfs in the StorageClass:

Warning  FailedMount 34s (x8 over 99s)  kubelet, gn3.stage.local  MountVolume.MountDevice failed for volume "pvc-4dec714e-1418-4318-8b27-db0ff8687fb7" : invalid character 'e' looking for beginning of value

Logs on the node show the following errors:

Failed to unmarshal output for command: mountdevice, output: "exit status 1 mkfs failed with exit status 1", error: invalid character 'e' looking for beginning of value

FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver, args: [mountdevice /var/lib/kubelet/plugins/kubernetes.io/flexvolume/ovirt/ovirt-flexvolume-driver/mounts/pvc-4dec714e-1418-4318-8b27-db0ff8687fb7 {"kubernetes.io/fsType":"xfs","kubernetes.io/pvOrVolumeName":"pvc-4dec714e-1418-4318-8b27-db0ff8687fb7","kubernetes.io/readwrite":"rw"}], error: exit status 1, output: "exit status 1 mkfs failed with exit status 1"

Excerpt of the node logs:

Nov 18 19:15:42 gn3 kubelet: W1118 19:15:42.962798     993 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-4dec714e-1418-4318-8b27-db0ff8687fb7
Nov 18 19:17:18 gn3 kubelet: I1118 19:17:18.052610     993 operation_generator.go:629] MountVolume.WaitForAttach entering for volume "pvc-4dec714e-1418-4318-8b27-db0ff8687fb7" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4dec714e-1418-4318-8b27-db0ff8687fb7") pod "test-flex" (UID: "eaac5cb3-0091-415b-8acb-fe945a72fd14") DevicePath "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3c2a386d-00bd-4689-8"
Nov 18 19:17:18 gn3 /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver[30859]: invoking with args [/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver waitforattach /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3c2a386d-00bd-4689-8 {"kubernetes.io/fsType":"xfs","kubernetes.io/pvOrVolumeName":"pvc-4dec714e-1418-4318-8b27-db0ff8687fb7","kubernetes.io/readwrite":"rw"}]
Nov 18 19:17:18 gn3 ovirt-api[30859]: calling ovirt api url: https://hosted-engine.local/ovirt-engine/api/vms/A1B2C3D4-1234-ABCD-AA00-ABCDFG123456
Nov 18 19:17:18 gn3 ovirt-api[30859]: calling ovirt api url: https://hosted-engine.local/ovirt-engine/api/vms/a1b2c3d4-1234-abcd-aa00-abcdfg123456/diskattachments/
Nov 18 19:17:18 gn3 /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver[30859]: invoking with args [/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver waitforattach /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3c2a386d-00bd-4689-8 {"kubernetes.io/fsType":"xfs","kubernetes.io/pvOrVolumeName":"pvc-4dec714e-1418-4318-8b27-db0ff8687fb7","kubernetes.io/readwrite":"rw"}]
Nov 18 19:17:18 gn3 kubelet: I1118 19:17:18.191656     993 operation_generator.go:638] MountVolume.WaitForAttach succeeded for volume "pvc-4dec714e-1418-4318-8b27-db0ff8687fb7" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4dec714e-1418-4318-8b27-db0ff8687fb7") pod "test-flex" (UID: "eaac5cb3-0091-415b-8acb-fe945a72fd14") DevicePath ""
Nov 18 19:17:18 gn3 kubelet: W1118 19:17:18.191705     993 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-4dec714e-1418-4318-8b27-db0ff8687fb7
Nov 18 19:17:18 gn3 /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver[30866]: invoking with args [/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver mountdevice /var/lib/kubelet/plugins/kubernetes.io/flexvolume/ovirt/ovirt-flexvolume-driver/mounts/pvc-4dec714e-1418-4318-8b27-db0ff8687fb7  {"kubernetes.io/fsType":"xfs","kubernetes.io/pvOrVolumeName":"pvc-4dec714e-1418-4318-8b27-db0ff8687fb7","kubernetes.io/readwrite":"rw"}]
Nov 18 19:17:18 gn3 ovirt-api[30866]: calling ovirt api url: https://hosted-engine.local/ovirt-engine/api/vms/A1B2C3D4-1234-ABCD-AA00-ABCDFG123456
Nov 18 19:17:18 gn3 ovirt-api[30866]: calling ovirt api url: https://hosted-engine.local/ovirt-engine/api/disks?search=name=pvc-4dec714e-1418-4318-8b27-db0ff8687fb7
Nov 18 19:17:18 gn3 ovirt-api[30866]: calling ovirt api url: https://hosted-engine.local/ovirt-engine/api/vms/a1b2c3d4-1234-abcd-aa00-abcdfg123456/diskattachments/3c2a386d-00bd-4689-89ad-d753fb37aa8b
Nov 18 19:17:18 gn3 kubelet: E1118 19:17:18.340608     993 driver-call.go:267] Failed to unmarshal output for command: mountdevice, output: "exit status 1 mkfs failed with exit status 1", error: invalid character 'e' looking for beginning of value
Nov 18 19:17:18 gn3 kubelet: W1118 19:17:18.340634     993 driver-call.go:150] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver, args: [mountdevice /var/lib/kubelet/plugins/kubernetes.io/flexvolume/ovirt/ovirt-flexvolume-driver/mounts/pvc-4dec714e-1418-4318-8b27-db0ff8687fb7  {"kubernetes.io/fsType":"xfs","kubernetes.io/pvOrVolumeName":"pvc-4dec714e-1418-4318-8b27-db0ff8687fb7","kubernetes.io/readwrite":"rw"}], error: exit status 1, output: "exit status 1 mkfs failed with exit status 1"
Nov 18 19:17:18 gn3 kubelet: E1118 19:17:18.340718     993 nestedpendingoperations.go:270] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4dec714e-1418-4318-8b27-db0ff8687fb7\"" failed. No retries permitted until 2019-11-18 19:19:20.340696175 +0100 CET m=+30255.719606156 (durationBeforeRetry 2m2s). Error: "MountVolume.MountDevice failed for volume \"pvc-4dec714e-1418-4318-8b27-db0ff8687fb7\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-4dec714e-1418-4318-8b27-db0ff8687fb7\") pod \"test-flex\" (UID: \"eaac5cb3-0091-415b-8acb-fe945a72fd14\") : invalid character 'e' looking for beginning of value"
Nov 18 19:17:45 gn3 kubelet: E1118 19:17:45.956180     993 kubelet.go:1669] Unable to mount volumes for pod "test-flex_default(eaac5cb3-0091-415b-8acb-fe945a72fd14)": timeout expired waiting for volumes to attach or mount for pod "default"/"test-flex". list of unmounted volumes=[test01]. list of unattached volumes=[test01 default-token-jxtpr]; skipping pod
Nov 18 19:17:45 gn3 kubelet: E1118 19:17:45.956216     993 pod_workers.go:190] Error syncing pod eaac5cb3-0091-415b-8acb-fe945a72fd14 ("test-flex_default(eaac5cb3-0091-415b-8acb-fe945a72fd14)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"test-flex". list of unmounted volumes=[test01]. list of unattached volumes=[test01 default-token-jxtpr]
Nov 18 19:17:58 gn3 kubelet: W1118 19:17:58.005438     993 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-4dec714e-1418-4318-8b27-db0ff8687fb7
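
For context, the "invalid character 'e' looking for beginning of value" part appears to come from the kubelet side: driver-call.go (seen in the excerpt above) parses whatever the flexvolume driver prints on stdout as a JSON status object, and here the driver printed the plain text "exit status 1 mkfs failed with exit status 1" instead. A minimal Go reproduction of just that decode step (illustrative only, not the kubelet's or driver's actual code):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// What the driver printed on stdout instead of a JSON status object.
	driverOutput := `exit status 1 mkfs failed with exit status 1`

	var status map[string]interface{}
	err := json.Unmarshal([]byte(driverOutput), &status)

	// Prints: invalid character 'e' looking for beginning of value
	fmt.Println(err)
}

So the JSON error only masks the real failure, which is mkfs exiting with status 1.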

kubectl describe pod test-flex

Name:         test-flex
Namespace:    default
Priority:     0
Node:         gn3.stage.local/192.168.50.144
Start Time:   Mon, 18 Nov 2019 19:08:57 +0100
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"test-flex","namespace":"default"},"spec":{"containers":[{"image":"php...
Status:       Pending
IP:           
Containers:
  test-flex:
    Container ID:   
    Image:          php:7-apache
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt from test01 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jxtpr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  test01:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  1g-ovirt-disk
    ReadOnly:   false
  default-token-jxtpr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jxtpr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From                          Message
  ----     ------                  ----               ----                          -------
  Normal   Scheduled               105s               default-scheduler             Successfully assigned default/test-flex to gn3.stage.local
  Normal   SuccessfulAttachVolume  104s               attachdetach-controller       AttachVolume.Attach succeeded for volume "pvc-4dec714e-1418-4318-8b27-db0ff8687fb7"
  Warning  FailedMount             34s (x8 over 99s)  kubelet, gn3.stage.local  MountVolume.MountDevice failed for volume "pvc-4dec714e-1418-4318-8b27-db0ff8687fb7" : invalid character 'e' looking for beginning of value

I have no idea how to troubleshoot this any further. Everything else seems to be fine: the disk is created and attached, but it fails to mount when fsType is set to xfs in the StorageClass.

With ext4 everything works as expected.

rgolangh commented 4 years ago

On Mon, 18 Nov 2019 at 20:40, gpastuszko notifications@github.com wrote:

  • SELinux is in permissive mode
  • mkfs.xfs is present
  • Kubernetes 1.15.4
  • CentOS 7.7
  • oVirt 4.2

I have no idea how to troubleshoot this any further. Everything else seems fine: the disk is created and attached but fails to mount.

Can you try to mkfs it manually on the node and see what's failing?


gpastuszko commented 4 years ago

In order to try mkfs on the node, the pod had to be deleted and the disk attached manually to the node in oVirt.

mkfs -t xfs /dev/sdb

mkfs.xfs: /dev/sdb appears to contain an existing filesystem (ext4).
mkfs.xfs: Use the -f option to force overwrite.

mkfs -F -t xfs /dev/sdb

mke2fs 1.42.9 (28-Dec-2013)

Your mke2fs.conf file does not define the xfs filesystem type.
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376

Allocating group tables: done                            
Writing inode tables: done                            
Writing superblocks and filesystem accounting information: done

resulting in an ext2 filesystem being created (presumably because mkfs stopped parsing options at the unrecognized -F flag, fell back to its default ext2 type, and passed the remaining flags through to mke2fs).

I checked other systems and none of them (CentOS, Ubuntu) have an xfs entry in /etc/mke2fs.conf.

We usually create xfs filesystems with mkfs.xfs.

mkfs.xfs -f /dev/sdb was successful

rgolangh commented 4 years ago

So what would be the expected behaviour? To forcefully create the fs? I need to gather some more info to understand how other implementations do that. I think that since this is a risky operation, it may be okay to guard you against this kind of action and require that your disk be clean.

Side note: I think the current behaviour assumes that if a filesystem of the desired type already exists then no mkfs will be attempted. I guess it didn't trigger here because the existing fs was != xfs. Maybe you should check 'wipe after delete' in oVirt so the disk will really be clean.
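
Roughly, the guard described above might look like the following sketch, assuming the driver shells out to blkid and mkfs.<type>; the function and its names are hypothetical, not the driver's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// ensureFilesystem formats device with fsType only when that is safe:
// it does nothing when the desired filesystem is already there, and it
// refuses to overwrite a different existing filesystem.
func ensureFilesystem(device, fsType string) error {
	// blkid prints the existing filesystem type; it exits non-zero and
	// prints nothing when the device has no recognizable signature.
	out, _ := exec.Command("blkid", "-o", "value", "-s", "TYPE", device).Output()
	existing := strings.TrimSpace(string(out))

	switch {
	case existing == fsType:
		return nil // already formatted with the desired type
	case existing != "":
		return fmt.Errorf("%s already contains a %s filesystem, refusing to overwrite", device, existing)
	default:
		// clean device: create the requested filesystem
		return exec.Command("mkfs."+fsType, device).Run()
	}
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: ensurefs /dev/sdX")
		os.Exit(2)
	}
	if err := ensureFilesystem(os.Args[1], "xfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

With a stale ext4 signature left on the oVirt disk, both a guard like this and a plain mkfs.xfs without -f end up refusing, which is consistent with the behaviour above; wiping the disk (oVirt's 'wipe after delete') avoids that state.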


gpastuszko commented 4 years ago

Thanks for your feedback.

Just to recap what the problem is:

  • The volumes are not mounted if xfs is selected as the desired filesystem in the StorageClass (for the errors, please see the first post).

In my previous post I tried to run mkfs on a disk already used by the pod. That disk already contained a filesystem (ext4), since that is what I had specified in the StorageClass, so understandably the mkfs on the node was not successful. The message "Your mke2fs.conf file does not define the xfs filesystem type" was a little misleading.

What I did now was:

  • created a new PersistentVolumeClaim, which resulted in a fresh disk created by oVirt
  • haven't used the claim in any pod, so it stayed untouched
  • attached the disk manually to the node
  • created a filesystem by executing mkfs -t xfs /dev/vda
  • the xfs filesystem was successfully created

As you can see, there is no error on the node. However, when I change fsType: to xfs in the StorageClass, the flexvolume driver fails to create the filesystem (on a new claim, new disk), throwing the same errors quoted in the first post.

For the moment, storage classes with fsType: ext4 work well. Thank you.

However, if I can be of any help providing more info to troubleshoot and fix the xfs issue, I'm happy to do so. I just don't know what other info I could provide.

rgolangh commented 4 years ago

I'm confused. Can you share the storage class and claim objects?


gpastuszko commented 4 years ago

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ovirt
provisioner: ovirt-volume-provisioner
parameters:
  ovirtStorageDomain: "iscsi"
  ovirtDiskThinProvisioning: "true"
  fsType: "xfs"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: 1g-ovirt-disk
  annotations:
    volume.beta.kubernetes.io/storage-class: ovirt
spec:
  storageClassName: ovirt
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: "test-flex"
spec:
  containers:
  - name: "test-flex"
    image: "php:7-apache"
    volumeMounts:
      - name: "test01"
        mountPath: "/opt"
  volumes:
  - name: "test01"
    persistentVolumeClaim:
      claimName: "1g-ovirt-disk"

The above results in the errors quoted in the first post.

Changing the storageclass to:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ovirt
provisioner: ovirt-volume-provisioner
parameters:
  ovirtStorageDomain: "iscsi"
  ovirtDiskThinProvisioning: "true"
  fsType: "ext4"

works as expected.
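
For reference, the fsType value from the StorageClass is what ends up in the options JSON that the kubelet passes to the driver's mountdevice call (visible in the logs above as "kubernetes.io/fsType":"xfs"). A minimal sketch of decoding those options; the struct and field names are illustrative, not the driver's actual types:

package main

import (
	"encoding/json"
	"fmt"
)

// mountOptions mirrors the option keys visible in the kubelet logs above.
// The struct is illustrative, not taken from the driver's source.
type mountOptions struct {
	FsType    string `json:"kubernetes.io/fsType"`
	PVName    string `json:"kubernetes.io/pvOrVolumeName"`
	ReadWrite string `json:"kubernetes.io/readwrite"`
}

func main() {
	raw := `{"kubernetes.io/fsType":"xfs","kubernetes.io/pvOrVolumeName":"pvc-4dec714e-1418-4318-8b27-db0ff8687fb7","kubernetes.io/readwrite":"rw"}`

	var opts mountOptions
	if err := json.Unmarshal([]byte(raw), &opts); err != nil {
		panic(err)
	}

	// Prints: xfs (the type the driver will try to mkfs on the node)
	fmt.Println(opts.FsType)
}

So the only difference between the working and failing setups is that single value reaching the mkfs step on the node.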

br0ziliy commented 4 years ago

Hi, I'm having exactly the same problem. Using fsType: ext4 in the StorageClass works: the disk is created in oVirt, attached to the worker node in oVirt, and mounted on the worker node and within the container.

With fsType: xfs in the StorageClass, the disk is created in oVirt and attached to the worker node in oVirt, but it is never mounted on the worker node; the error is:

May 15 04:26:43 [redacted] kubelet: W0515 04:26:43.936979   10492 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver, args: [mountdevice /var/lib/kubelet/plugins/kubernetes.io/flexvolume/ovirt/ovirt-flexvolume-driver/mounts/pvc-17f9034e-709c-4493-8c04-1772f86a3cfd  {"kubernetes.io/fsType":"xfs","kubernetes.io/pvOrVolumeName":"pvc-17f9034e-709c-4493-8c04-1772f86a3cfd","kubernetes.io/readwrite":"rw"}], error: exit status 1, output: "exit status 1 mkfs failed with exit status 1"

I'd be more than happy to help troubleshoot this further.

rgolangh commented 4 years ago

What version are you using? I strongly suggest you move to the CSI implementation; see https://github.com/ovirt/csi-driver

sandrobonazzola commented 2 years ago

This project is no longer maintained, closing.