minio / directpv

Kubernetes CSI driver for Direct Attached Storage
https://directpv.io
GNU Affero General Public License v3.0

Only one of the multiple drivers is "Available" #577

Closed pjy324 closed 2 years ago

pjy324 commented 2 years ago

Only one of the multiple drives is "Available", and on every "kubectl directpv dr ls" the drive shown changes.

Some drives were forcibly reset. If the reset is a problem, is there a way to solve it?

directpv version v3.0.0

node4 lsblk

[root@node4 ~]# lsblk -f
NAME          FSTYPE      LABEL UUID                                   MOUNTPOINT
sda
├─sda1        xfs               72208f45-19ac-4aac-a214-fcf5dd0fbcd6   /boot
└─sda2        LVM2_member       SHWC9z-cPUg-M7Qt-Bb9H-TGa9-iVyz-qoQaQn
  ├─rhel-root xfs               30c0764a-0efa-422c-acb3-d06dda5bf30b   /
  └─rhel-swap swap              9703df92-acbe-4144-9641-d00a832111a5
sdb           xfs               ada40e87-b9e9-4058-84be-64610224fe00
sdc           xfs               584ba5c2-49c8-4006-af34-11dd4f18d1f2
sdd           xfs               4a815450-035a-48fd-a3c7-3df18b7b6418
sde           xfs               40076c90-f575-4f66-be5b-683ed4b00bcd
sdf           xfs               4c9535ad-6178-41dc-bd31-ba95e0aeb49b
sdg           xfs               e0eb3ada-19ff-4a2b-a706-7a4696a7f8ec
sdh           xfs               5201ba76-efd6-4988-a487-474cd8860e6d
sdi           xfs               c4f6a2c6-aca5-4c39-a5e1-cf1b8f4b9147
sdj           xfs               20c19975-3e94-40ba-87a1-0d33f17c8d33
sdk           xfs               78b8efae-6ee0-4ff8-a4ff-54848ff228ac
sdl           xfs               6e1b1375-3122-4b7e-b0b9-8a108741502c

dr ls

[root@node1 ~]# kubectl directpv dr ls | grep node4
 /dev/dm-1  16 GiB   -  linux-swap   -  node4  -  Available
 /dev/sda2  99 GiB   -  LVM2_member  -  node4  -  Available
 /dev/sdg   1.0 TiB  -  xfs          -  node4  -  Available
[root@node1 ~]# kubectl directpv dr ls | grep node4
 /dev/dm-1  16 GiB   -  linux-swap   -  node4  -  Available
 /dev/sda2  99 GiB   -  LVM2_member  -  node4  -  Available
 /dev/sdb   1.0 TiB  -  xfs          -  node4  -  Available
[root@node1 ~]# kubectl directpv dr ls | grep node4
 /dev/dm-1  16 GiB   -  linux-swap   -  node4  -  Available
 /dev/sda2  99 GiB   -  LVM2_member  -  node4  -  Available
 /dev/sdj   1.0 TiB  -  xfs          -  node4  -  Available
[root@node1 ~]# kubectl directpv dr ls | grep node4
 /dev/dm-1  16 GiB   -  linux-swap   -  node4  -  Available
 /dev/sda2  99 GiB   -  LVM2_member  -  node4  -  Available
 /dev/sdd   1.0 TiB  -  xfs          -  node4  -  Available
[root@node1 ~]# kubectl directpv dr ls | grep node4
 /dev/dm-1  16 GiB   -  linux-swap   -  node4  -  Available
 /dev/sda2  99 GiB   -  LVM2_member  -  node4  -  Available
 /dev/sdg   1.0 TiB  -  xfs          -  node4  -  Available

The directpv-drive-discovery log had the following messages:

E0509 12:14:29.755287 3696 listener.go:206] "failed to handle an event" err="Operation cannot be fulfilled on directcsidrives.direct.csi.min.io \"9249a578-17bb-fd0c-d146-de6df1247da1\": the object has been modified; please apply your changes to the latest version and try again" change="(MISSING)"
I0509 12:14:29.756441 3696 utils.go:64] [/dev/sdk] path mismatch: /dev/sdh -> %!v(MISSING)
I0509 12:14:29.771229 3696 utils.go:64] [/dev/sdh] path mismatch: /dev/sdf -> %!v(MISSING)
I0509 12:14:29.785239 3696 utils.go:64] [/dev/sdf] path mismatch: /dev/sde -> %!v(MISSING)
I0509 12:14:29.799640 3696 utils.go:64] [/dev/sde] path mismatch: /dev/sdd -> %!v(MISSING)
I0509 12:14:29.813634 3696 utils.go:64] [/dev/sdd] path mismatch: /dev/sdb -> %!v(MISSING)
I0509 12:14:29.825387 3696 utils.go:64] [/dev/sdb] path mismatch: /dev/sdc -> %!v(MISSING)
I0509 12:14:29.836060 3696 utils.go:64] [/dev/sdc] path mismatch: /dev/sdg -> %!v(MISSING)
I0509 12:14:29.852347 3696 utils.go:64] [/dev/sdg] path mismatch: /dev/sdl -> %!v(MISSING)
I0509 12:14:29.869976 3696 utils.go:64] [/dev/sdl] path mismatch: /dev/sdi -> %!v(MISSING)
I0509 12:14:29.882209 3696 utils.go:64] [/dev/sdi] path mismatch: /dev/sdj -> %!v(MISSING)
I0509 12:14:29.893286 3696 utils.go:124] [dm-1] filesystem mismatch: linux-swap -> swap
I0509 12:14:29.896080 3696 utils.go:64] [/dev/sdj] path mismatch: /dev/sdk -> %!v(MISSING)
I0509 12:14:29.905356 3696 utils.go:64] [/dev/sdj] path mismatch: /dev/sda -> %!v(MISSING)

Praveenrajmani commented 2 years ago

The following rules apply for "Available" drives:

For a drive to be Available, the above conditions should be satisfied. kubectl directpv drives ls --all will also show the "unavailable" drives.
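
To see why a particular drive is not Available, the underlying drive object can be inspected directly. A minimal sketch, assuming the DirectCSIDrive CRD and the direct.csi.min.io/node label that appear in the YAML pasted later in this thread (the object name is a placeholder):

kubectl get directcsidrives -l direct.csi.min.io/node=node4
kubectl get directcsidrives <drive-object-name> -o yaml

The driveStatus, mountpoint and filesystem fields in the object's status show how the drive was classified.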

I0509 12:14:29.756441 3696 utils.go:64] [/dev/sdk] path mismatch: /dev/sdh -> %!v(MISSING)
I0509 12:14:29.771229 3696 utils.go:64] [/dev/sdh] path mismatch: /dev/sdf -> %!v(MISSING)
I0509 12:14:29.785239 3696 utils.go:64] [/dev/sdf] path mismatch: /dev/sde -> %!v(MISSING)
I0509 12:14:29.799640 3696 utils.go:64] [/dev/sde] path mismatch: /dev/sdd -> %!v(MISSING)
I0509 12:14:29.813634 3696 utils.go:64] [/dev/sdd] path mismatch: /dev/sdb -> %!v(MISSING)
I0509 12:14:29.825387 3696 utils.go:64] [/dev/sdb] path mismatch: /dev/sdc -> %!v(MISSING)
I0509 12:14:29.836060 3696 utils.go:64] [/dev/sdc] path mismatch: /dev/sdg -> %!v(MISSING)
I0509 12:14:29.852347 3696 utils.go:64] [/dev/sdg] path mismatch: /dev/sdl -> %!v(MISSING)
I0509 12:14:29.869976 3696 utils.go:64] [/dev/sdl] path mismatch: /dev/sdi -> %!v(MISSING)
I0509 12:14:29.882209 3696 utils.go:64] [/dev/sdi] path mismatch: /dev/sdj -> %!v(MISSING)
I0509 12:14:29.893286 3696 utils.go:124] [dm-1] filesystem mismatch: linux-swap -> swap
I0509 12:14:29.896080 3696 utils.go:64] [/dev/sdj] path mismatch: /dev/sdk -> %!v(MISSING)
I0509 12:14:29.905356 3696 utils.go:64] [/dev/sdj] path mismatch: /dev/sda -> %!v(MISSING)

^^ These logs are expected when there are drive resets and the device paths change as a result. They aren't error logs; they are "info" logs.

If the reset is a problem, is there a way to solve it?

There are no potential problems due to drive resets. directpv (versions >= 3.0.0) checks the local drives periodically and updates the drive CRD objects. So, generally, just make sure resets aren't happening too frequently, to avoid minor down-times (the time it takes to sync the drive CRD objects with the probed ones).
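
Since the drive objects are re-synced periodically, one rough way to watch them converge after a reset is to follow the status fields directly. This is a sketch using only generic kubectl options and the status fields visible in the drive CRDs pasted further down (field names assumed from that YAML):

kubectl get directcsidrives \
  -l direct.csi.min.io/node=node4 \
  -o custom-columns=NAME:.metadata.name,PATH:.status.path,STATUS:.status.driveStatus,FSUUID:.status.ueventFSUUID \
  --watch

Once the PATH and FSUUID columns stop changing, the objects have caught up with the probed drives.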

Praveenrajmani commented 2 years ago

We need to add relevant docs explaining the drive states. Keeping this issue open until we add the docs for this.

pjy324 commented 2 years ago

We have 8-10 drives per node. sdb, sdc, sde, sdf, sdg, sdh, sdi, sdj, sdk, sdl all meet the conditions, but only one shows up in the "dr ls" list.

For your information, the UUIDs changed in the past because the drives were forcibly formatted with mkfs.xfs /dev/sdb.

[root@node4 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0  500G  0 disk
├─sda1          8:1    0 1014M  0 part /boot
└─sda2          8:2    0   99G  0 part
  ├─rhel-root 253:0    0   83G  0 lvm  /
  └─rhel-swap 253:1    0   16G  0 lvm
sdb             8:16   0    1T  0 disk
sdc             8:32   0    1T  0 disk
sdd             8:48   0    1T  0 disk
sde             8:64   0    1T  0 disk
sdf             8:80   0    1T  0 disk
sdg             8:96   0    1T  0 disk
sdh             8:112  0    1T  0 disk
sdi             8:128  0    1T  0 disk
sdj             8:144  0    1T  0 disk
sdk             8:160  0    1T  0 disk
sdl             8:176  0    1T  0 disk
[root@node4 mnt]# blkid
/dev/mapper/rhel-root: UUID="30c0764a-0efa-422c-acb3-d06dda5bf30b" TYPE="xfs"
/dev/sda2: UUID="SHWC9z-cPUg-M7Qt-Bb9H-TGa9-iVyz-qoQaQn" TYPE="LVM2_member"
/dev/sda1: UUID="72208f45-19ac-4aac-a214-fcf5dd0fbcd6" TYPE="xfs"
/dev/sdc: UUID="584ba5c2-49c8-4006-af34-11dd4f18d1f2" TYPE="xfs"
/dev/sdb: UUID="ada40e87-b9e9-4058-84be-64610224fe00" TYPE="xfs"
/dev/sdd: UUID="4a815450-035a-48fd-a3c7-3df18b7b6418" TYPE="xfs"
/dev/sdf: UUID="4c9535ad-6178-41dc-bd31-ba95e0aeb49b" TYPE="xfs"
/dev/sdh: UUID="5201ba76-efd6-4988-a487-474cd8860e6d" TYPE="xfs"
/dev/sde: UUID="40076c90-f575-4f66-be5b-683ed4b00bcd" TYPE="xfs"
/dev/sdg: UUID="e0eb3ada-19ff-4a2b-a706-7a4696a7f8ec" TYPE="xfs"
/dev/sdi: UUID="c4f6a2c6-aca5-4c39-a5e1-cf1b8f4b9147" TYPE="xfs"
/dev/sdk: UUID="78b8efae-6ee0-4ff8-a4ff-54848ff228ac" TYPE="xfs"
/dev/sdj: LABEL="DIRECTCSI" UUID="9249a578-17bb-fd0c-d146-de6df1247da1" TYPE="xfs"
/dev/sdl: UUID="6e1b1375-3122-4b7e-b0b9-8a108741502c" TYPE="xfs"
/dev/mapper/rhel-swap: UUID="9703df92-acbe-4144-9641-d00a832111a5" TYPE="swap"

Every time the command is executed, the drive shown changes:


[root@node1 kubespray]# kubectl directpv dr ls -a --nodes 'node4'
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-0  83 GiB    -          xfs          -        node4  -            Unavailable
 /dev/dm-1  16 GiB    -          linux-swap   -        node4  -            Available
 /dev/sda1  1014 MiB  -          xfs          -        node4  -            Unavailable
 /dev/sda2  99 GiB    -          LVM2_member  -        node4  -            Available
 /dev/sdb   1.0 TiB   -          xfs          -        node4  -            Ready*       ; Mountpoint mismatch - Expected /var/lib/direct-csi/mnt/ found ; ; MountpointOptions mismatch - Expected rw found []
[root@node1 kubespray]# kubectl directpv dr ls -a --nodes 'node4'
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-0  83 GiB    -          xfs          -        node4  -            Unavailable
 /dev/dm-1  16 GiB    -          linux-swap   -        node4  -            Available
 /dev/sda1  1014 MiB  -          xfs          -        node4  -            Unavailable
 /dev/sda2  99 GiB    -          LVM2_member  -        node4  -            Available
 /dev/sdj   1.0 TiB   -          xfs          -        node4  -            Ready
[root@node1 kubespray]# kubectl directpv dr ls -a --nodes 'node4'
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-0  83 GiB    -          xfs          -        node4  -            Unavailable
 /dev/dm-1  16 GiB    -          linux-swap   -        node4  -            Available
 /dev/sda1  1014 MiB  -          xfs          -        node4  -            Unavailable
 /dev/sda2  99 GiB    -          LVM2_member  -        node4  -            Available
 /dev/sdf   1.0 TiB   -          xfs          -        node4  -            Ready*       ; Mountpoint mismatch - Expected /var/lib/direct-csi/mnt/ found ; ; MountpointOptions mismatch - Expected rw found []
[root@node1 kubespray]# kubectl directpv dr ls -a --nodes 'node4'
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-0  83 GiB    -          xfs          -        node4  -            Unavailable
 /dev/dm-1  16 GiB    -          linux-swap   -        node4  -            Available
 /dev/sda1  1014 MiB  -          xfs          -        node4  -            Unavailable
 /dev/sda2  99 GiB    -          LVM2_member  -        node4  -            Available
 /dev/sdh   1.0 TiB   -          xfs          -        node4  -            Ready*       ; Mountpoint mismatch - Expected /var/lib/direct-csi/mnt/ found ; ; MountpointOptions mismatch - Expected rw found []
[root@node1 kubespray]# kubectl directpv dr ls -a --nodes 'node4'
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-0  83 GiB    -          xfs          -        node4  -            Unavailable
 /dev/dm-1  16 GiB    -          linux-swap   -        node4  -            Available
 /dev/sda1  1014 MiB  -          xfs          -        node4  -            Unavailable
 /dev/sda2  99 GiB    -          LVM2_member  -        node4  -            Available
 /dev/sdj   1.0 TiB   -          xfs          -        node4  -            Ready
[root@node1 kubespray]# kubectl directpv dr ls -a --nodes 'node4'
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-0  83 GiB    -          xfs          -        node4  -            Unavailable
 /dev/dm-1  16 GiB    -          linux-swap   -        node4  -            Available
 /dev/sda   500 GiB   -          -            -        node4  -            Ready*       ; Mountpoint mismatch - Expected /var/lib/direct-csi/mnt/ found ; ; MountpointOptions mismatch - Expected rw found []
 /dev/sda1  1014 MiB  -          xfs          -        node4  -            Unavailable
 /dev/sda2  99 GiB    -          LVM2_member  -        node4  -            Available
[root@node4 mnt]# pwd
/var/lib/direct-csi/mnt
[root@node4 mnt]# ll
total 0
drwxr-xr-x. 2 root root 6 May 10 08:13 9249a578-17bb-fd0c-d146-de6df1247da1

Praveenrajmani commented 2 years ago

sdb, sdc, sde, sdf, sdg, sdh, sdi, sdj, sdk, sdl all meet the conditions, but only one shows up in the "dr ls" list.

I see. Can you paste the output of cat /run/udev/data/b8:16 and cat /run/udev/data/b8:144 @pjy324?
Also, run kubectl directpv drives list --all -o yaml > /tmp/drives.yaml and attach the drives.yaml file here.

For your information, the UUIDs changed in the past because the drives were forcibly formatted with mkfs.xfs /dev/sdb.

You SHOULD NOT format the drives managed by directpv. May we know why you did that?

Also, can you explain what sort of drives these are and what type of drive controller is being used?

pjy324 commented 2 years ago

We reset and reinstalled the k8s cluster. That's why I formatted the drives.

Some drives are recognized, but many are not. Node 9 information is as follows.

[root@node1 jy]# kubectl directpv dr ls
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE    ACCESS-TIER  STATUS
 /dev/dm-1  16 GiB    -          linux-swap   -        node10  -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node10  -            Available
 /dev/sdb   1.0 TiB   854 GiB    xfs          1        node10  -            InUse
 /dev/sdc   1.0 TiB   854 GiB    xfs          1        node10  -            InUse
 /dev/sdd   1.0 TiB   854 GiB    xfs          1        node10  -            InUse
 /dev/sde   1.0 TiB   854 GiB    xfs          1        node10  -            InUse
 /dev/sdf   1.0 TiB   -          xfs          -        node10  -            Ready
 /dev/sdg   1.0 TiB   854 GiB    xfs          1        node10  -            InUse
 /dev/sdh   1.0 TiB   854 GiB    xfs          1        node10  -            InUse
 /dev/sdi   1.0 TiB   854 GiB    xfs          1        node10  -            InUse
 /dev/sdj   1.0 TiB   854 GiB    xfs          1        node10  -            InUse
 /dev/sdk   1.0 TiB   -          xfs          -        node10  -            Ready
 /dev/sdl   1.0 TiB   -          xfs          -        node10  -            Available
 /dev/dm-1  16 GiB    -          linux-swap   -        node4   -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node4   -            Available
 /dev/sdb   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdc   1.0 TiB   10 GiB     xfs          1        node4   -            InUse
 /dev/sdd   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sde   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdg   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdh   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdi   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdj   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdk   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdl   1.0 TiB   -          xfs          -        node4   -            Available
 /dev/dm-1  16 GiB    -          linux-swap   -        node5   -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node5   -            Available
 /dev/sde   1.0 TiB   -          xfs          -        node5   -            Available
 /dev/dm-1  16 GiB    -          linux-swap   -        node6   -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node6   -            Available
 /dev/sdc   1.0 TiB   -          xfs          -        node6   -            Available
 /dev/dm-1  16 GiB    -          linux-swap   -        node7   -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node7   -            Available
 /dev/sdb   1.0 TiB   854 GiB    xfs          1        node7   -            InUse
 /dev/sdc   1.0 TiB   854 GiB    xfs          1        node7   -            InUse
 /dev/sdd   1.0 TiB   854 GiB    xfs          1        node7   -            InUse
 /dev/sde   1.0 TiB   854 GiB    xfs          1        node7   -            InUse
 /dev/sdf   1.0 TiB   854 GiB    xfs          1        node7   -            InUse
 /dev/sdg   1.0 TiB   854 GiB    xfs          1        node7   -            InUse
 /dev/sdh   1.0 TiB   -          xfs          -        node7   -            Ready
 /dev/sdi   1.0 TiB   -          xfs          -        node7   -            Ready
 /dev/sdj   1.0 TiB   854 GiB    xfs          1        node7   -            InUse
 /dev/sdk   1.0 TiB   854 GiB    xfs          1        node7   -            InUse
 /dev/sdl   1.0 TiB   -          xfs          -        node7   -            Available
 /dev/dm-1  16 GiB    -          linux-swap   -        node8   -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node8   -            Available
 /dev/sdb   1.0 TiB   -          xfs          -        node8   -            Available
 /dev/dm-1  16 GiB    -          linux-swap   -        node9   -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node9   -            Available
 /dev/sdb   1.0 TiB   -          xfs          -        node9   -            Available
[root@node9 data]# cat /run/udev/data/b8:144
S:disk/by-path/pci-0000:03:00.0-scsi-0:0:10:0
S:disk/by-uuid/0eaf345f-ea68-4a52-97f3-dd687b374bbd
W:351129
I:19028
E:ID_BUS=scsi
E:ID_FS_TYPE=xfs
E:ID_FS_USAGE=filesystem
E:ID_FS_UUID=0eaf345f-ea68-4a52-97f3-dd687b374bbd
E:ID_FS_UUID_ENC=0eaf345f-ea68-4a52-97f3-dd687b374bbd
E:ID_MODEL=Virtual_disk
E:ID_MODEL_ENC=Virtual\x20disk\x20\x20\x20\x20
E:ID_PATH=pci-0000:03:00.0-scsi-0:0:10:0
E:ID_PATH_TAG=pci-0000_03_00_0-scsi-0_0_10_0
E:ID_REVISION=2.0
E:ID_SCSI=1
E:ID_TYPE=disk
E:ID_VENDOR=VMware
E:ID_VENDOR_ENC=VMware\x20\x20
E:MPATH_SBIN_PATH=/sbin
G:systemd

[root@node9 data]# lsblk -a -b -S -s -p -m -f
NAME     HCTL       TYPE VENDOR   MODEL             REV TRAN NAME              SIZE OWNER GROUP MODE       NAME     FSTYPE LABEL UUID                                 MOUNTPOINT
/dev/sdb 0:0:1:0    disk VMware   Virtual disk     2.0       /dev/sdb 1099511627776 root  disk  brw-rw---- /dev/sdb xfs          f64127f5-e075-4bd0-915d-cfb911f2ae90
/dev/sdc 0:0:2:0    disk VMware   Virtual disk     2.0       /dev/sdc 1099511627776 root  disk  brw-rw---- /dev/sdc xfs          9f057c27-081e-48cd-bd46-8abe0d191a47
/dev/sdd 0:0:3:0    disk VMware   Virtual disk     2.0       /dev/sdd 1099511627776 root  disk  brw-rw---- /dev/sdd xfs          dc994bd3-79b7-4bd7-ae5c-a859fb73eb16
/dev/sde 0:0:4:0    disk VMware   Virtual disk     2.0       /dev/sde 1099511627776 root  disk  brw-rw---- /dev/sde xfs          6c506448-e6d3-40ec-b9ce-82e275ca112f
/dev/sdf 0:0:5:0    disk VMware   Virtual disk     2.0       /dev/sdf 1099511627776 root  disk  brw-rw---- /dev/sdf xfs          3ae76542-1ce8-42e5-8536-c4ccc8336b99
/dev/sdg 0:0:6:0    disk VMware   Virtual disk     2.0       /dev/sdg 1099511627776 root  disk  brw-rw---- /dev/sdg xfs          639d6a01-d959-4aa0-a94e-367a70e481e7
/dev/sdh 0:0:8:0    disk VMware   Virtual disk     2.0       /dev/sdh 1099511627776 root  disk  brw-rw---- /dev/sdh xfs          0128dd4d-3c95-410f-aa1e-deb967c5970f
/dev/sdi 0:0:9:0    disk VMware   Virtual disk     2.0       /dev/sdi 1099511627776 root  disk  brw-rw---- /dev/sdi xfs          d0ba38db-938d-46bf-b987-1eae39064d73
/dev/sdj 0:0:10:0   disk VMware   Virtual disk     2.0       /dev/sdj 1099511627776 root  disk  brw-rw---- /dev/sdj xfs          0eaf345f-ea68-4a52-97f3-dd687b374bbd
/dev/sdk 0:0:11:0   disk VMware   Virtual disk     2.0       /dev/sdk 1099511627776 root  disk  brw-rw---- /dev/sdk xfs          c0529cee-404b-45e3-97e0-382c82b2f995
/dev/sdl 0:0:12:0   disk VMware   Virtual disk     2.0       /dev/sdl 1099511627776 root  disk  brw-rw---- /dev/sdl xfs          52608fed-cf8d-4334-ae2e-fb7a2350414f
- apiVersion: direct.csi.min.io/v1beta4
  kind: DirectCSIDrive
  metadata:
    creationTimestamp: "2022-05-10T03:44:10Z"
    generation: 1
    labels:
      direct.csi.min.io/access-tier: Unknown
      direct.csi.min.io/created-by: directcsi-driver
      direct.csi.min.io/node: node9
      direct.csi.min.io/path: dm-0
      direct.csi.min.io/version: v1beta4
    managedFields:
    - apiVersion: direct.csi.min.io/v1beta4
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:direct.csi.min.io/access-tier: {}
            f:direct.csi.min.io/created-by: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/path: {}
            f:direct.csi.min.io/version: {}
        f:spec:
          .: {}
          f:directCSIOwned: {}
        f:status:
          .: {}
          f:accessTier: {}
          f:allocatedCapacity: {}
          f:conditions:
            .: {}
            k:{"type":"Formatted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Initialized"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Mounted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Owned"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Ready"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
          f:dmName: {}
          f:dmUUID: {}
          f:driveStatus: {}
          f:filesystem: {}
          f:filesystemUUID: {}
          f:freeCapacity: {}
          f:logicalBlockSize: {}
          f:majorNumber: {}
          f:mountOptions: {}
          f:mountpoint: {}
          f:nodeName: {}
          f:path: {}
          f:physicalBlockSize: {}
          f:rootPartition: {}
          f:topology:
            .: {}
            f:direct.csi.min.io/identity: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/rack: {}
            f:direct.csi.min.io/region: {}
            f:direct.csi.min.io/zone: {}
          f:totalCapacity: {}
          f:ueventFSUUID: {}
      manager: directpv
      operation: Update
      time: "2022-05-10T03:44:10Z"
    name: 5ee8ee3f-6655-421a-88b5-9a0dd68f7e06
    resourceVersion: "15477058"
    uid: 62260edd-7ac8-4164-99a8-8a7e637244f1
  spec:
    directCSIOwned: false
  status:
    accessTier: Unknown
    allocatedCapacity: 8692269056
    conditions:
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: NotAdded
      status: "False"
      type: Owned
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: /
      reason: NotAdded
      status: "True"
      type: Mounted
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: xfs
      reason: NotAdded
      status: "True"
      type: Formatted
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: Initialized
      status: "True"
      type: Initialized
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: Ready
      status: "True"
      type: Ready
    dmName: rhel-root
    dmUUID: LVM-1mmbmrgyflEIlU2BMeTgd09PVzXIVM3t6Vs3ypbKh05qsnr7ZzsH0YNdrrl2Giuq
    driveStatus: Unavailable
    filesystem: xfs
    filesystemUUID: 30c0764a-0efa-422c-acb3-d06dda5bf30b
    freeCapacity: 80440885248
    logicalBlockSize: 512
    majorNumber: 253
    mountOptions:
    - relatime
    - rw
    mountpoint: /
    nodeName: node9
    path: /dev/dm-0
    physicalBlockSize: 512
    rootPartition: dm-0
    topology:
      direct.csi.min.io/identity: direct-csi-min-io
      direct.csi.min.io/node: node9
      direct.csi.min.io/rack: default
      direct.csi.min.io/region: default
      direct.csi.min.io/zone: default
    totalCapacity: 89133154304
    ueventFSUUID: 30c0764a-0efa-422c-acb3-d06dda5bf30b
- apiVersion: direct.csi.min.io/v1beta4
  kind: DirectCSIDrive
  metadata:
    creationTimestamp: "2022-05-10T03:44:10Z"
    generation: 1
    labels:
      direct.csi.min.io/access-tier: Unknown
      direct.csi.min.io/created-by: directcsi-driver
      direct.csi.min.io/node: node9
      direct.csi.min.io/path: dm-1
      direct.csi.min.io/version: v1beta4
    managedFields:
    - apiVersion: direct.csi.min.io/v1beta4
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:direct.csi.min.io/access-tier: {}
            f:direct.csi.min.io/created-by: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/path: {}
            f:direct.csi.min.io/version: {}
        f:spec:
          .: {}
          f:directCSIOwned: {}
        f:status:
          .: {}
          f:accessTier: {}
          f:allocatedCapacity: {}
          f:conditions:
            .: {}
            k:{"type":"Formatted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Initialized"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Mounted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Owned"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Ready"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
          f:dmName: {}
          f:dmUUID: {}
          f:driveStatus: {}
          f:filesystem: {}
          f:logicalBlockSize: {}
          f:majorNumber: {}
          f:minorNumber: {}
          f:nodeName: {}
          f:path: {}
          f:physicalBlockSize: {}
          f:rootPartition: {}
          f:topology:
            .: {}
            f:direct.csi.min.io/identity: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/rack: {}
            f:direct.csi.min.io/region: {}
            f:direct.csi.min.io/zone: {}
          f:totalCapacity: {}
          f:ueventFSUUID: {}
      manager: directpv
      operation: Update
      time: "2022-05-10T03:44:10Z"
    name: 2491bf0e-5e03-8ce7-0327-97aebafdc289
    resourceVersion: "15476999"
    uid: 2598ab25-e678-483d-b3ff-5c4e23735d66
  spec:
    directCSIOwned: false
  status:
    accessTier: Unknown
    allocatedCapacity: 17175674880
    conditions:
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: NotAdded
      status: "False"
      type: Owned
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: NotAdded
      status: "False"
      type: Mounted
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: xfs
      reason: NotAdded
      status: "True"
      type: Formatted
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: Initialized
      status: "True"
      type: Initialized
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: Ready
      status: "True"
      type: Ready
    dmName: rhel-swap
    dmUUID: LVM-1mmbmrgyflEIlU2BMeTgd09PVzXIVM3tNa1S6S2SCe0YHBf281SJcsqglaL02qVM
    driveStatus: Available
    filesystem: linux-swap
    logicalBlockSize: 512
    majorNumber: 253
    minorNumber: 1
    nodeName: node9
    path: /dev/dm-1
    physicalBlockSize: 4096
    rootPartition: dm-1
    topology:
      direct.csi.min.io/identity: direct-csi-min-io
      direct.csi.min.io/node: node9
      direct.csi.min.io/rack: default
      direct.csi.min.io/region: default
      direct.csi.min.io/zone: default
    totalCapacity: 17175674880
    ueventFSUUID: 9703df92-acbe-4144-9641-d00a832111a5
- apiVersion: direct.csi.min.io/v1beta4
  kind: DirectCSIDrive
  metadata:
    creationTimestamp: "2022-05-10T03:44:10Z"
    generation: 1
    labels:
      direct.csi.min.io/access-tier: Unknown
      direct.csi.min.io/created-by: directcsi-driver
      direct.csi.min.io/node: node9
      direct.csi.min.io/path: sda1
      direct.csi.min.io/version: v1beta4
    managedFields:
    - apiVersion: direct.csi.min.io/v1beta4
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:direct.csi.min.io/access-tier: {}
            f:direct.csi.min.io/created-by: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/path: {}
            f:direct.csi.min.io/version: {}
        f:spec:
          .: {}
          f:directCSIOwned: {}
        f:status:
          .: {}
          f:accessTier: {}
          f:allocatedCapacity: {}
          f:conditions:
            .: {}
            k:{"type":"Formatted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Initialized"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Mounted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Owned"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Ready"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
          f:driveStatus: {}
          f:filesystem: {}
          f:filesystemUUID: {}
          f:freeCapacity: {}
          f:logicalBlockSize: {}
          f:majorNumber: {}
          f:minorNumber: {}
          f:modelNumber: {}
          f:mountOptions: {}
          f:mountpoint: {}
          f:nodeName: {}
          f:partTableType: {}
          f:partitionNum: {}
          f:path: {}
          f:pciPath: {}
          f:physicalBlockSize: {}
          f:rootPartition: {}
          f:topology:
            .: {}
            f:direct.csi.min.io/identity: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/rack: {}
            f:direct.csi.min.io/region: {}
            f:direct.csi.min.io/zone: {}
          f:totalCapacity: {}
          f:ueventFSUUID: {}
          f:vendor: {}
      manager: directpv
      operation: Update
      time: "2022-05-10T03:44:10Z"
    name: 097bc822-6181-1c59-55f7-49aa3a84ec47
    resourceVersion: "15476988"
    uid: 75d21a49-e68f-4afd-b4f7-0ea32d2f54bd
  spec:
    directCSIOwned: false
  status:
    accessTier: Unknown
    allocatedCapacity: 245735424
    conditions:
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: NotAdded
      status: "False"
      type: Owned
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: /boot
      reason: NotAdded
      status: "True"
      type: Mounted
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: xfs
      reason: NotAdded
      status: "True"
      type: Formatted
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: Initialized
      status: "True"
      type: Initialized
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: Ready
      status: "True"
      type: Ready
    driveStatus: Unavailable
    filesystem: xfs
    filesystemUUID: 72208f45-19ac-4aac-a214-fcf5dd0fbcd6
    freeCapacity: 817520640
    logicalBlockSize: 512
    majorNumber: 8
    minorNumber: 1
    modelNumber: Virtual_disk
    mountOptions:
    - relatime
    - rw
    mountpoint: /boot
    nodeName: node9
    partTableType: dos
    partitionNum: 1
    path: /dev/sda1
    pciPath: pci-0000:03:00.0-scsi-0:0:0:0
    physicalBlockSize: 512
    rootPartition: sda1
    topology:
      direct.csi.min.io/identity: direct-csi-min-io
      direct.csi.min.io/node: node9
      direct.csi.min.io/rack: default
      direct.csi.min.io/region: default
      direct.csi.min.io/zone: default
    totalCapacity: 1063256064
    ueventFSUUID: 72208f45-19ac-4aac-a214-fcf5dd0fbcd6
    vendor: VMware
- apiVersion: direct.csi.min.io/v1beta4
  kind: DirectCSIDrive
  metadata:
    creationTimestamp: "2022-05-10T03:44:10Z"
    generation: 1
    labels:
      direct.csi.min.io/access-tier: Unknown
      direct.csi.min.io/created-by: directcsi-driver
      direct.csi.min.io/node: node9
      direct.csi.min.io/path: sda2
      direct.csi.min.io/version: v1beta4
    managedFields:
    - apiVersion: direct.csi.min.io/v1beta4
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:direct.csi.min.io/access-tier: {}
            f:direct.csi.min.io/created-by: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/path: {}
            f:direct.csi.min.io/version: {}
        f:spec:
          .: {}
          f:directCSIOwned: {}
        f:status:
          .: {}
          f:accessTier: {}
          f:allocatedCapacity: {}
          f:conditions:
            .: {}
            k:{"type":"Formatted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Initialized"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Mounted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Owned"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Ready"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
          f:driveStatus: {}
          f:filesystem: {}
          f:logicalBlockSize: {}
          f:majorNumber: {}
          f:minorNumber: {}
          f:modelNumber: {}
          f:nodeName: {}
          f:partTableType: {}
          f:partitionNum: {}
          f:path: {}
          f:pciPath: {}
          f:physicalBlockSize: {}
          f:rootPartition: {}
          f:topology:
            .: {}
            f:direct.csi.min.io/identity: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/rack: {}
            f:direct.csi.min.io/region: {}
            f:direct.csi.min.io/zone: {}
          f:totalCapacity: {}
          f:ueventFSUUID: {}
          f:vendor: {}
      manager: directpv
      operation: Update
      time: "2022-05-10T03:44:10Z"
    name: f2fdf31d-0285-3447-81b2-3bfca992ef01
    resourceVersion: "15476990"
    uid: d8ff6823-7b14-47dc-947d-3402d65b37ba
  spec:
    directCSIOwned: false
  status:
    accessTier: Unknown
    allocatedCapacity: 106309877760
    conditions:
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: NotAdded
      status: "False"
      type: Owned
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: NotAdded
      status: "False"
      type: Mounted
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: xfs
      reason: NotAdded
      status: "True"
      type: Formatted
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: Initialized
      status: "True"
      type: Initialized
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: Ready
      status: "True"
      type: Ready
    driveStatus: Available
    filesystem: LVM2_member
    logicalBlockSize: 512
    majorNumber: 8
    minorNumber: 2
    modelNumber: Virtual_disk
    nodeName: node9
    partTableType: dos
    partitionNum: 2
    path: /dev/sda2
    pciPath: pci-0000:03:00.0-scsi-0:0:0:0
    physicalBlockSize: 4096
    rootPartition: sda2
    topology:
      direct.csi.min.io/identity: direct-csi-min-io
      direct.csi.min.io/node: node9
      direct.csi.min.io/rack: default
      direct.csi.min.io/region: default
      direct.csi.min.io/zone: default
    totalCapacity: 106309877760
    ueventFSUUID: SHWC9z-cPUg-M7Qt-Bb9H-TGa9-iVyz-qoQaQn
    vendor: VMware
- apiVersion: direct.csi.min.io/v1beta4
  kind: DirectCSIDrive
  metadata:
    creationTimestamp: "2022-05-10T03:44:10Z"
    generation: 486575
    labels:
      direct.csi.min.io/access-tier: Unknown
      direct.csi.min.io/created-by: directcsi-driver
      direct.csi.min.io/node: node9
      direct.csi.min.io/path: sde
      direct.csi.min.io/version: v1beta4
    managedFields:
    - apiVersion: direct.csi.min.io/v1beta4
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:direct.csi.min.io/access-tier: {}
            f:direct.csi.min.io/created-by: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/path: {}
            f:direct.csi.min.io/version: {}
        f:spec:
          .: {}
          f:directCSIOwned: {}
        f:status:
          .: {}
          f:accessTier: {}
          f:allocatedCapacity: {}
          f:conditions:
            .: {}
            k:{"type":"Formatted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Initialized"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Mounted"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Owned"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Ready"}:
              .: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
          f:driveStatus: {}
          f:filesystem: {}
          f:filesystemUUID: {}
          f:freeCapacity: {}
          f:logicalBlockSize: {}
          f:majorNumber: {}
          f:minorNumber: {}
          f:modelNumber: {}
          f:nodeName: {}
          f:path: {}
          f:pciPath: {}
          f:physicalBlockSize: {}
          f:rootPartition: {}
          f:topology:
            .: {}
            f:direct.csi.min.io/identity: {}
            f:direct.csi.min.io/node: {}
            f:direct.csi.min.io/rack: {}
            f:direct.csi.min.io/region: {}
            f:direct.csi.min.io/zone: {}
          f:totalCapacity: {}
          f:ueventFSUUID: {}
          f:vendor: {}
      manager: directpv
      operation: Update
      time: "2022-05-10T05:29:47Z"
    name: 12f5dfca-cb38-d611-3a13-e6b57cf91b3f
    resourceVersion: "17458958"
    uid: ea7e08ba-cae4-405f-8a3c-ad8388d2613e
  spec:
    directCSIOwned: false
  status:
    accessTier: Unknown
    allocatedCapacity: 536969216
    conditions:
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: NotAdded
      status: "False"
      type: Owned
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: NotAdded
      status: "False"
      type: Mounted
    - lastTransitionTime: "2022-05-10T05:29:47Z"
      message: ""
      reason: Added
      status: "True"
      type: Formatted
    - lastTransitionTime: "2022-05-10T03:44:10Z"
      message: ""
      reason: Initialized
      status: "True"
      type: Initialized
    - lastTransitionTime: "2022-05-10T05:29:47Z"
      message: ""
      reason: Ready
      status: "True"
      type: Ready
    driveStatus: Available
    filesystem: xfs
    filesystemUUID: 6c506448-e6d3-40ec-b9ce-82e275ca112f
    freeCapacity: 1098974658560
    logicalBlockSize: 512
    majorNumber: 8
    minorNumber: 64
    modelNumber: Virtual_disk
    nodeName: node9
    path: /dev/sde
    pciPath: pci-0000:03:00.0-scsi-0:0:4:0
    physicalBlockSize: 4096
    rootPartition: sde
    topology:
      direct.csi.min.io/identity: direct-csi-min-io
      direct.csi.min.io/node: node9
      direct.csi.min.io/rack: default
      direct.csi.min.io/region: default
      direct.csi.min.io/zone: default
    totalCapacity: 1099511627776
    ueventFSUUID: 6c506448-e6d3-40ec-b9ce-82e275ca112f
    vendor: VMware

Praveenrajmani commented 2 years ago

I see the drives from node4 showing up here in the output:

 /dev/sdb   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdc   1.0 TiB   10 GiB     xfs          1        node4   -            InUse
 /dev/sdd   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sde   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdg   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdh   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdi   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdj   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdk   1.0 TiB   854 GiB    xfs          1        node4   -            InUse
 /dev/sdl   1.0 TiB   -          xfs          -        node4   -            Available

What is the issue here? @pjy324

Praveenrajmani commented 2 years ago

[root@node9 data]# cat /run/udev/data/b8:144
S:disk/by-path/pci-0000:03:00.0-scsi-0:0:10:0
S:disk/by-uuid/0eaf345f-ea68-4a52-97f3-dd687b374bbd
W:351129
I:19028
E:ID_BUS=scsi
E:ID_FS_TYPE=xfs
E:ID_FS_USAGE=filesystem
E:ID_FS_UUID=0eaf345f-ea68-4a52-97f3-dd687b374bbd
E:ID_FS_UUID_ENC=0eaf345f-ea68-4a52-97f3-dd687b374bbd
E:ID_MODEL=Virtual_disk
E:ID_MODEL_ENC=Virtual\x20disk\x20\x20\x20\x20
E:ID_PATH=pci-0000:03:00.0-scsi-0:0:10:0
E:ID_PATH_TAG=pci-0000_03_00_0-scsi-0_0_10_0
E:ID_REVISION=2.0
E:ID_SCSI=1
E:ID_TYPE=disk
E:ID_VENDOR=VMware
E:ID_VENDOR_ENC=VMware\x20\x20
E:MPATH_SBIN_PATH=/sbin
G:systemd

Also, these virtual disks do not have any hardware persistent properties like serial number, WWID, etc. DirectPV matches drives by their persistent properties. So, generally, directpv does not expect virtual disks.

You can also configure your device driver to generate WWIDs and serial numbers for the drives.
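
To check which persistent identifiers the guest actually sees for a disk (none show up in the udev data above), standard util-linux/udev tools can be run on the node; these are generic host commands, not directpv commands:

lsblk -d -o NAME,WWN,SERIAL,MODEL /dev/sdb
udevadm info --query=property --name=/dev/sdb | grep -E 'ID_WWN|ID_SERIAL|ID_SCSI_SERIAL'
ls -l /dev/disk/by-id/

If WWN and SERIAL are empty and there are no wwn-* or scsi-* symlinks under /dev/disk/by-id/, the disk exposes no persistent hardware identity for directpv to match on.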

pjy324 commented 2 years ago

node9 has 11 drives. Only one out of 11 is being picked up, and the name changes every time the command is executed. I want to use all 11 drives.

[root@node9 ~]# lsblk -f
NAME          FSTYPE      LABEL UUID                                   MOUNTPOINT
sda
├─sda1        xfs               72208f45-19ac-4aac-a214-fcf5dd0fbcd6   /boot
└─sda2        LVM2_member       SHWC9z-cPUg-M7Qt-Bb9H-TGa9-iVyz-qoQaQn
  ├─rhel-root xfs               30c0764a-0efa-422c-acb3-d06dda5bf30b   /
  └─rhel-swap swap              9703df92-acbe-4144-9641-d00a832111a5
sdb           xfs               f64127f5-e075-4bd0-915d-cfb911f2ae90
sdc           xfs               9f057c27-081e-48cd-bd46-8abe0d191a47
sdd           xfs               dc994bd3-79b7-4bd7-ae5c-a859fb73eb16
sde           xfs               6c506448-e6d3-40ec-b9ce-82e275ca112f
sdf           xfs               3ae76542-1ce8-42e5-8536-c4ccc8336b99
sdg           xfs               639d6a01-d959-4aa0-a94e-367a70e481e7
sdh           xfs               0128dd4d-3c95-410f-aa1e-deb967c5970f
sdi           xfs               d0ba38db-938d-46bf-b987-1eae39064d73
sdj           xfs               0eaf345f-ea68-4a52-97f3-dd687b374bbd
sdk           xfs               c0529cee-404b-45e3-97e0-382c82b2f995
sdl           xfs               52608fed-cf8d-4334-ae2e-fb7a2350414f

[root@node1 jy]# kubectl directpv drives ls --nodes node9
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-1  16 GiB    -          linux-swap   -        node9  -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node9  -            Available
 /dev/sdg   1.0 TiB   -          xfs          -        node9  -            Available
[root@node1 jy]# kubectl directpv drives ls --nodes node9
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-1  16 GiB    -          linux-swap   -        node9  -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node9  -            Available
[root@node1 jy]# kubectl directpv drives ls --nodes node9
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-1  16 GiB    -          linux-swap   -        node9  -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node9  -            Available
 /dev/sdi   1.0 TiB   -          xfs          -        node9  -            Available
[root@node1 jy]# kubectl directpv drives ls --nodes node9
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-1  16 GiB    -          linux-swap   -        node9  -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node9  -            Available
 /dev/sdg   1.0 TiB   -          xfs          -        node9  -            Available
[root@node1 jy]# kubectl directpv drives ls --nodes node9
 DRIVE      CAPACITY  ALLOCATED  FILESYSTEM   VOLUMES  NODE   ACCESS-TIER  STATUS
 /dev/dm-1  16 GiB    -          linux-swap   -        node9  -            Available
 /dev/sda2  99 GiB    -          LVM2_member  -        node9  -            Available
 /dev/sdg   1.0 TiB   -          xfs          -        node9  -            Available
Praveenrajmani commented 2 years ago

node9 has 11 drives. Only one out of 11 is being picked up, and the name changes every time the command is executed. I want to use all 11 drives.

Yes @pjy324, the reason is this: https://github.com/minio/directpv/issues/577#issuecomment-1121962727. Once you configure these virtual disks to have WWIDs/serial numbers you won't see this behavior, as directpv will correctly match the drives based on their immutable (hardware) persistent properties.

Right now, with these virtual disks, directpv can't match the correct drive as it can't see any persistent properties.

pjy324 commented 2 years ago

The drives identified by the 'kubectl directpv drives ls --nodes node9' command are constantly changing, making it difficult to execute the format command. Also, the following error occurs even when it is executed:

[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
[root@node1 jy]# kubectl directpv drives format --nodes 'node9' --drives '/dev/sdi' --force
Error: Operation cannot be fulfilled on directcsidrives.direct.csi.min.io "12f5dfca-cb38-d611-3a13-e6b57cf91b3f": the object has been modified; please apply your changes to the latest version and try again
ERROR Operation cannot be fulfilled on directcsidrives.direct.csi.min.io "12f5dfca-cb38-d611-3a13-e6b57cf91b3f": the object has been modified; please apply your changes to the latest version and try again

Praveenrajmani commented 2 years ago

You need to configure VMware to assign WWIDs and serial numbers for the disks @pjy324.

This doc - https://communities.vmware.com/t5/vSphere-Guest-SDK-Discussions/WWN-ID-of-a-VMFS-virtual-disk/td-p/1330635 might help

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-3F14F20C-2865-4345-B20C-D28C179A2D6A.html
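
For vSphere virtual disks specifically, the setting commonly used to expose a stable identifier to the guest is disk.EnableUUID in the VM's advanced configuration (the .vmx file). Treat this as a sketch of the general approach rather than an exact procedure for your vSphere version:

disk.EnableUUID = "TRUE"

With the VM powered off, add the parameter and power it back on; /dev/disk/by-id/ on the node should then contain wwn-*/scsi-* entries for the data disks, which gives directpv persistent properties to match against.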

Praveenrajmani commented 2 years ago

I have also opened another issue related to this: https://github.com/minio/directpv/issues/580

pjy324 commented 2 years ago

@Praveenrajmani

How can I proceed with the second solution? We tried to remove node9 (k8s worker) and then add it again. Is there a guide on how to proceed?

Possible solutions:

  1. Fixing the VM to assign unique properties like WWID, SerialNumber to the virtual disks
  2. Provide support in directpv - for fresh installations (no remote drives present), skip matching and create drive objects for the initial discovery before starting the uevent listener and syncing.
Praveenrajmani commented 2 years ago

How can I proceed with the second solution?

The second solution would require some changes in the code to support this scenario. It needs some internal discussion with the team.

We tried to remove node9 (k8s worker) and then add it again.

This won't help. As suggested, you need to configure the VM to assign WWIDs and serial numbers to the virtual drives.

Is there a guide on how to proceed?

You need to explore the VMware/VirtualBox options for assigning WWIDs and serial numbers to the virtual disks.