sergelogvinov / proxmox-csi-plugin

Proxmox CSI Plugin
Apache License 2.0

MountVolume.MountDevice failed for volume The file /dev/disk/by-id/wwn-[hex] does not exist and no size was specified #106

Closed: trunet closed this issue 2 months ago

trunet commented 1 year ago

Bug Report

I'm trying to mount a volume on a pod, and it's giving this error:

  Warning  FailedMount             1s (x7 over 33s)  kubelet                  MountVolume.MountDevice failed for volume "pvc-c0f894e9-576b-404d-a270-86d20767d37d" : rpc error: code = Internal desc = format of disk "/dev/disk/by-id/wwn-0x5056432d49443033" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/c58612b0a78f02c0fca6aa37f64ad7c7a45d6155add0c6dff22a898d428fc0a9/globalmount") options:("noatime,defaults") errcode:(exit status 1) output:(mke2fs 1.46.2 (28-Feb-2021)
The file /dev/disk/by-id/wwn-0x5056432d49443033 does not exist and no size was specified.

Description

What's strange is that on Proxmox the wwn is this one: scsi3: local-zfs:vm-9999-pvc-c0f894e9-576b-404d-a270-86d20767d37d,backup=0,iothread=1,size=1G,wwn=0x5056432d49443033

But if I use talosctl to list /dev, the wwn doesn't match. The volume's disk is attached to the VM as sde; I know that because sde wasn't there before the pod started:

❯ talosctl -n [REDACTED] list -l /dev | grep sd
[REDACTED]   Drw-------    0     0     0         Oct 24 01:59:54   sda
[REDACTED]   Drw-------    0     0     0         Oct 24 02:08:42   sdb
[REDACTED]   Drw-------    0     0     0         Oct 24 02:00:07   sdc
[REDACTED]   Drw-------    0     0     0         Oct 25 00:19:28   sde

❯ talosctl -n [REDACTED] list -l /dev/disk/by-id | grep sd
[REDACTED]   Lrwxrwxrwx   0     0     9         Oct 24 02:00:07   scsi-33000000024585f0a -> ../../sdc
[REDACTED]   Lrwxrwxrwx   0     0     9         Oct 25 00:19:28   scsi-330000000c5fb5a6d -> ../../sde <- not matching id
[REDACTED]   Lrwxrwxrwx   0     0     9         Oct 24 01:59:54   scsi-35056432d49443031 -> ../../sda
[REDACTED]   Lrwxrwxrwx   0     0     9         Oct 24 02:08:42   scsi-35056432d49443032 -> ../../sdb
[REDACTED]   Lrwxrwxrwx   0     0     9         Oct 24 02:00:07   wwn-0x3000000024585f0a -> ../../sdc
[REDACTED]   Lrwxrwxrwx   0     0     9         Oct 25 00:19:28   wwn-0x30000000c5fb5a6d -> ../../sde <- not matching id
[REDACTED]   Lrwxrwxrwx   0     0     9         Oct 24 01:59:54   wwn-0x5056432d49443031 -> ../../sda
[REDACTED]   Lrwxrwxrwx   0     0     9         Oct 24 02:08:42   wwn-0x5056432d49443032 -> ../../sdb

As you can see, the wwn doesn't match for some reason; it should end with 3033.
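
For reference, one way to cross-check whether the stale symlink comes from udev or from the kernel itself is to read the WWID the kernel reports for the new disk and compare it with the by-id links. This is only a sketch: it assumes Talos exposes sysfs through talosctl read, the node address is a placeholder, and the exact sysfs attribute can vary by kernel and driver.

# Placeholder node address; device/wwid is the usual attribute for SCSI disks.
talosctl -n <node-ip> read /sys/block/sde/device/wwid
# Compare with the symlinks udev created:
talosctl -n <node-ip> list -l /dev/disk/by-id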

Logs

Environment

trunet commented 1 year ago

If I reboot the Talos VM, it works fine again.

trunet commented 1 year ago

I saw you released 0.4.0 and just upgraded. I'm now getting lots of this on my console:

[REDACTED]: daemon:     err: [2023-10-24T23:11:49.504815104Z]: udevd[41877]: timeout 'scsi_id --export --whitelisted -d /dev/sde'
[REDACTED]: daemon: warning: [2023-10-24T23:11:49.508018104Z]: udevd[41877]: slow: 'scsi_id --export --whitelisted -d /dev/sde' [41880]
[REDACTED]: daemon:     err: [2023-10-24T23:11:50.512340104Z]: udevd[41877]: timeout: killing 'scsi_id --export --whitelisted -d /dev/sde' [41880]
[REDACTED]: daemon: warning: [2023-10-24T23:11:50.516132104Z]: udevd[41877]: slow: 'scsi_id --export --whitelisted -d /dev/sde' [41880]
[REDACTED]: daemon:     err: [2023-10-24T23:11:51.520329104Z]: udevd[41877]: timeout: killing 'scsi_id --export --whitelisted -d /dev/sde' [41880]
[REDACTED]: daemon: warning: [2023-10-24T23:11:51.524067104Z]: udevd[41877]: slow: 'scsi_id --export --whitelisted -d /dev/sde' [41880]
sergelogvinov commented 1 year ago

Hello, I think you have a performance issue with the ZFS pool. It requires a lot of free memory depending on the zpool size (2 GB or more of free memory).

udevd[41877]: timeout 'scsi_id --export --whitelisted -d /dev/sde'

This record tells you that you have an I/O issue with the disk: the system cannot read the block device's information. udevd is a system service that runs when a block device appears in the system. The CSI plugin cannot find the disk because it does not exist on the system yet. The system is probably so slow that the device does not appear in the expected time.

So a zpool requires a lot of tuning and CPU/RAM resources. If your server is that small, use LVM.
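
For context, the wwn-... path the plugin formats is just a symlink that udev creates when it finishes processing the "add" event for the new disk. On a node where a shell is available (on Talos that would require a privileged debug pod), a generic check could look like the following sketch, using only standard udev tooling:

# Wait for udev to finish processing queued events (up to 30s).
udevadm settle --timeout=30
# List the symlinks udev created for the new disk; a healthy attach includes a wwn-... link.
udevadm info --query=symlink --name=/dev/sde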

trunet commented 1 year ago

It's an Intel Avoton 8-core 2.4 GHz with 16 GB RAM; I allocated 14 GB and 6 cores to my Talos instance (it's the sole tenant on this hardware). It's not that slow; the I/O can be slow because of the spinning disk.

In any case, I used cStor on this hardware before and it ran fine. I wanted to simplify and reduce the storage resources in my cluster by migrating to this CSI, which is a great idea.

Having said that, I agree it's not the best hardware, but the CSI should still behave even when it's slow; it shouldn't just hang or throw an unrecoverable error that only a restart fixes. The "proper time" is whatever the software waits for; it should wait for the device rather than assume it's there when it still isn't. A state machine or something along those lines could make it friendlier to slower hardware.

zimmertr commented 1 year ago

I am using this project for my homelab, which is a 22-core Xeon E5-2699 v4 build with 512 GB of DDR4-2400 ECC SDRAM and two ZFS pools. One is a 6x6 TB 7200 RPM HDD pool presented as 16 TB RAID10, and the other is a 2x1 TB Samsung 970 Pro NVMe SSD pool presented as 1 TB RAID1.

Happy to run any benchmarks/tests if it can help development. Or answer any feasibility questions about ZFS that anyone has. I do think I have pretty production-grade hardware despite shoving it all in a small Node 804 chassis.

Since I got this CSI plugin running, I'm using it successfully with 7 separate statically provisioned ZVols without any problems. It's awesome! I intend to make great use of it and at least double that number in the coming months as I rebuild things.

After a couple years of burnout, homelabbing feels fun again. In part, because of this very project! 😄

trunet commented 1 year ago

Very nice setup. Unfortunately I don't have the money for all that.

Another odd thing happened while I was trying to manage it: my Plex pod mounted the disk belonging to Nextcloud. I have no idea how that happened, but it happened.

root@plex-plex-media-server-0:~# ls -lha
total 2.0M
drwxr-xr-x 15 plex users 4.0K Oct 26 03:04 .
drwxr-xr-x  1 root root   115 Oct 26 16:49 ..
-rw-r--r--  1 plex users 3.2K Oct 26 03:04 .htaccess
-rw-r--r--  1 plex users  101 Oct 26 03:04 .user.ini
drwxr-xr-x 45 plex users 4.0K Oct 26 03:04 3rdparty
-rw-r--r--  1 plex users  24K Oct 26 03:04 AUTHORS
-rw-r--r--  1 plex users  34K Oct 26 03:04 COPYING
drwxrwxrwx  3 plex users 4.0K Sep 21 00:57 Library
drwxr-xr-x 51 plex users 4.0K Oct 26 03:04 apps
-rw-r--r--  1 plex users 1.3K Oct 26 03:04 composer.json
-rw-r--r--  1 plex users 3.1K Oct 26 03:04 composer.lock
drwxr-xr-x  2 plex users 4.0K Oct 26 04:04 config
-rw-r--r--  1 plex users 4.0K Oct 26 03:04 console.php
drwxr-xr-x 24 plex users 4.0K Oct 26 03:04 core
-rw-r--r--  1 plex users 6.2K Oct 26 03:04 cron.php
drwxr-xr-x  2 plex users 4.0K Oct 26 03:04 custom_apps
drwxr-xr-x  2 plex users 4.0K Oct 26 03:03 data
drwxr-xr-x  2 plex users  12K Oct 26 03:04 dist
-rw-r--r--  1 plex users  156 Oct 26 03:04 index.html
-rw-r--r--  1 plex users 3.4K Oct 26 03:04 index.php
drwxr-xr-x  6 plex users 4.0K Oct 26 03:04 lib
-rw-r--r--  1 plex users    0 Oct 26 03:58 nextcloud-init-sync.lock
-rwxr-xr-x  1 plex users  283 Oct 26 03:04 occ
drwxr-xr-x  2 plex users 4.0K Oct 26 03:04 ocs
drwxr-xr-x  2 plex users 4.0K Oct 26 03:04 ocs-provider
-rw-r--r--  1 plex users 1.8M Oct 26 03:04 package-lock.json
-rw-r--r--  1 plex users 6.2K Oct 26 03:04 package.json
-rw-r--r--  1 plex users 3.2K Oct 26 03:04 public.php
-rw-r--r--  1 plex users 5.5K Oct 26 03:04 remote.php
drwxr-xr-x  4 plex users 4.0K Oct 26 03:04 resources
-rw-r--r--  1 plex users   26 Oct 26 03:04 robots.txt
-rw-r--r--  1 plex users 2.4K Oct 26 03:04 status.php
drwxr-xr-x  3 plex users 4.0K Oct 26 03:04 themes
-rw-r--r--  1 plex users  403 Oct 26 03:04 version.php
dbiegunski commented 1 year ago

I have the same issue.

I1108 15:32:58.323001 1 controller.go:210] Started VA processing "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:32:58.323023 1 csi_handler.go:224] CSIHandler: processing VA "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:32:58.323030 1 csi_handler.go:251] Attaching "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:32:58.323035 1 csi_handler.go:421] Starting attach operation for "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:32:58.323062 1 csi_handler.go:341] Adding finalizer to PV "pvc-818277ad-43c3-4885-a00d-98e48438d1c1"
I1108 15:32:58.329107 1 csi_handler.go:350] PV finalizer added to "pvc-818277ad-43c3-4885-a00d-98e48438d1c1"
I1108 15:32:58.329129 1 csi_handler.go:740] Found NodeID worker1 in CSINode worker1
I1108 15:32:58.329143 1 csi_handler.go:312] VA finalizer added to "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:32:58.329161 1 csi_handler.go:326] NodeID annotation added to "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:32:58.333913 1 connection.go:193] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I1108 15:32:58.333928 1 connection.go:194] GRPC request: {"node_id":"worker1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"ssd":"true","storage":"local-lvm","storage.kubernetes.io/csiProvisionerIdentity":"1699457083238-1451-csi.proxmox.sinextra.dev"},"volume_id":"pve-cluster/pve/local-lvm/vm-9999-pvc-818277ad-43c3-4885-a00d-98e48438d1c1"}
I1108 15:33:00.845327 1 connection.go:200] GRPC response: {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443031","lun":"1"}}
I1108 15:33:00.845345 1 connection.go:201] GRPC error:
I1108 15:33:00.845353 1 csi_handler.go:264] Attached "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:33:00.845359 1 util.go:38] Marking as attached "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:33:00.852475 1 util.go:52] Marked as attached "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:33:00.852489 1 csi_handler.go:270] Fully attached "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:33:00.852495 1 csi_handler.go:240] CSIHandler: finished processing "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:33:00.852511 1 controller.go:210] Started VA processing "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:33:00.852515 1 csi_handler.go:224] CSIHandler: processing VA "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"
I1108 15:33:00.852520 1 csi_handler.go:246] "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5" is already attached
I1108 15:33:00.852525 1 csi_handler.go:240] CSIHandler: finished processing "csi-3ab0047d2d457e8bf2009bc91fc88d63ae4137bc75b4456f2fbb4b5592220de5"

AttachVolume.Attach succeeded for volume "pvc-818277ad-43c3-4885-a00d-98e48438d1c1"

MountVolume.MountDevice failed for volume "pvc-818277ad-43c3-4885-a00d-98e48438d1c1" : rpc error: code = Internal desc = format of disk "/dev/disk/by-id/wwn-0x5056432d49443031" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/c9835a11dc5c332d0f8faf1c92571b17dd00f5612b2ee81c36d5852f6b88f8d4/globalmount") options:("noatime,defaults") errcode:(exit status 1) output:(mke2fs 1.47.0 (5-Feb-2023) The file /dev/disk/by-id/wwn-0x5056432d49443031 does not exist and no size was specified. )

[root@worker1 by-id]# ls -l /var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/
total 0

[root@worker1 by-id]# ls -l /dev/disk/by-id/wwn-0x5056432d49443031
ls: cannot access '/dev/disk/by-id/wwn-0x5056432d49443031': No such file or directory

Any reason why the directory would be empty on my node?

trunet commented 1 year ago

Unfortunately, I stopped using this CSI. I moved to OpenEBS zfs-localpv, and it's working flawlessly now. The only downside is that you need to over-provision a disk for your k8s cluster to use. The plus side is that this CSI creates disks under a different VM ID (9999), so they aren't replicated if you have replication configured, whereas a normally attached disk is.

sergelogvinov commented 1 year ago

Yep, zfs-localpv is a good project; I used to use it. ZFS uses the node's memory instead of the Proxmox host's memory, so add more reserved memory in the kubelet config (--system-reserved).
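
A minimal sketch of that reservation, assuming the kubelet is configured through a KubeletConfiguration file; the amounts are placeholders to tune to your zpool/ARC size (the flag equivalent is --system-reserved=cpu=500m,memory=2Gi):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Placeholder values: reserve memory for the OS and the ZFS ARC so the
# kubelet does not hand it all out to pods.
systemReserved:
  cpu: 500m
  memory: 2Gi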

sergelogvinov commented 1 year ago

Quoting @dbiegunski's comment above: "I have the same issue. [...] Any reason why the directory would be empty on my node?"

Can you check the following (example commands are sketched after the list):

  1. kubernetes node dmesg logs
  2. udev logs - /dev/disk/by-id/wwn-0x5056432d49443031 is a link to the real device, udev creates it.
  3. proxmox vm config
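
A rough sketch of those checks; the device name and the VM ID are placeholders:

# On the Kubernetes node: kernel messages around the disk hot-plug
dmesg | grep -iE 'scsi|sd[a-z]'
# Symlinks and properties udev exported for the attached disk
ls -l /dev/disk/by-id/
udevadm info --query=property --name=/dev/sdb | grep -E 'ID_WWN|ID_SERIAL'
# On the Proxmox host: the VM configuration (replace $VMID with the VM id)
cat /etc/pve/qemu-server/$VMID.conf
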
Jirom-1 commented 8 months ago

I'm getting the same problem as well.

Here are the logs from the "proxmox-csi-plugin-node" pod

I0305 22:25:36.453373 1 node.go:510] NodeGetCapabilities: called with args {}
I0305 22:25:36.462955 1 node.go:510] NodeGetCapabilities: called with args {}
I0305 22:25:36.464799 1 node.go:510] NodeGetCapabilities: called with args {}
I0305 22:25:36.467061 1 node.go:89] NodeStageVolume: called with args {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443032","lun":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/e562097457e98f8f0adab818185529bbd3cc6727b17d34162300db9bfadf6247/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"ssd":"false","storage":"Drives-6","storage.kubernetes.io/csiProvisionerIdentity":"1709676136514-6639-csi.proxmox.sinextra.dev"},"volume_id":"PALITRONICA-PVE/palitronica-6/Drives-6/vm-9999-pvc-fe8bc67c-296a-404b-9ee4-527db00a5a87"}
I0305 22:25:36.467178 1 mount_linux.go:577] Attempting to determine if disk "/dev/disk/by-id/wwn-0x5056432d49443032" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/by-id/wwn-0x5056432d49443032])
I0305 22:25:36.470260 1 mount_linux.go:580] Output: ""
I0305 22:25:36.470314 1 mount_linux.go:515] Disk "/dev/disk/by-id/wwn-0x5056432d49443032" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/disk/by-id/wwn-0x5056432d49443032]
E0305 22:25:36.475762 1 mount_linux.go:522] format of disk "/dev/disk/by-id/wwn-0x5056432d49443032" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/e562097457e98f8f0adab818185529bbd3cc6727b17d34162300db9bfadf6247/globalmount") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.47.0 (5-Feb-2023) The file /dev/disk/by-id/wwn-0x5056432d49443032 does not exist and no size was specified. )
E0305 22:25:36.475825 1 node.go:204] NodeStageVolume: failed to mount device /dev/disk/by-id/wwn-0x5056432d49443032 at /var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/e562097457e98f8f0adab818185529bbd3cc6727b17d34162300db9bfadf6247/globalmount (fstype: ext4), error: format of disk "/dev/disk/by-id/wwn-0x5056432d49443032" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/e562097457e98f8f0adab818185529bbd3cc6727b17d34162300db9bfadf6247/globalmount") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.47.0 (5-Feb-2023) The file /dev/disk/by-id/wwn-0x5056432d49443032 does not exist and no size was specified. )
E0305 22:25:36.475902 1 main.go:122] GRPC error: rpc error: code = Internal desc = format of disk "/dev/disk/by-id/wwn-0x5056432d49443032" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/e562097457e98f8f0adab818185529bbd3cc6727b17d34162300db9bfadf6247/globalmount") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.47.0 (5-Feb-2023) The file /dev/disk/by-id/wwn-0x5056432d49443032 does not exist and no size was specified.

sergelogvinov commented 8 months ago

Yep, based on these logs it definitely did not find the block device.

Can you check the dmesg logs and the block devices manually (e.g. ls -laR /dev/disk/) on the host machine? And what OS do you use?

Jirom-1 commented 8 months ago

$ ls -laR /dev/disk
/dev/disk:
total 0
drwxr-xr-x 7 root root 140 Feb 29 18:03 .
drwxr-xr-x 19 root root 3440 Mar 4 11:54 ..
drwxr-xr-x 2 root root 260 Feb 29 18:34 by-diskseq
drwxr-xr-x 2 root root 260 Mar 1 10:57 by-id
drwxr-xr-x 2 root root 100 Feb 29 18:03 by-partuuid
drwxr-xr-x 2 root root 180 Feb 29 18:34 by-path
drwxr-xr-x 2 root root 120 Mar 1 10:57 by-uuid

/dev/disk/by-diskseq:
total 0
drwxr-xr-x 2 root root 260 Feb 29 18:34 .
drwxr-xr-x 7 root root 140 Feb 29 18:03 ..
lrwxrwxrwx 1 root root 9 Feb 29 18:03 10 -> ../../sda
lrwxrwxrwx 1 root root 9 Feb 29 18:34 11 -> ../../sdb
lrwxrwxrwx 1 root root 9 Feb 29 18:03 12 -> ../../sr0
lrwxrwxrwx 1 root root 11 Feb 29 18:03 15 -> ../../loop0
lrwxrwxrwx 1 root root 11 Feb 29 18:03 16 -> ../../loop1
lrwxrwxrwx 1 root root 11 Feb 29 18:03 17 -> ../../loop2
lrwxrwxrwx 1 root root 11 Feb 29 18:03 18 -> ../../loop3
lrwxrwxrwx 1 root root 11 Feb 29 18:03 19 -> ../../loop4
lrwxrwxrwx 1 root root 11 Feb 29 18:03 20 -> ../../loop5
lrwxrwxrwx 1 root root 11 Feb 29 18:03 21 -> ../../loop6
lrwxrwxrwx 1 root root 11 Feb 29 18:03 22 -> ../../loop7

/dev/disk/by-id:
total 0
drwxr-xr-x 2 root root 260 Mar 1 10:57 .
drwxr-xr-x 7 root root 140 Feb 29 18:03 ..
lrwxrwxrwx 1 root root 9 Feb 29 18:03 ata-QEMU_DVD-ROM_QM00003 -> ../../sr0
lrwxrwxrwx 1 root root 10 Feb 29 18:03 dm-name-debian--vm--template--vg-root -> ../../dm-0
lrwxrwxrwx 1 root root 10 Mar 1 10:57 dm-name-debian--vm--template--vg-swap_1 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Mar 1 10:57 dm-uuid-LVM-QVfmUMRpVqVcoAhXX0QqqJZ1N6aPKg6qlX1Po5uxXBlREyGRJ2oTKzjQklJFwJtd -> ../../dm-1
lrwxrwxrwx 1 root root 10 Feb 29 18:03 dm-uuid-LVM-QVfmUMRpVqVcoAhXX0QqqJZ1N6aPKg6qwjOf7vAkEqnT7fdHIOW9ObKVIk7tnqeD -> ../../dm-0
lrwxrwxrwx 1 root root 10 Feb 29 18:03 lvm-pv-uuid-yP9xeQ-ahRk-iVUl-WW7z-wClO-hPxe-NEnaLf -> ../../sda5
lrwxrwxrwx 1 root root 9 Feb 29 18:03 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0 -> ../../sda
lrwxrwxrwx 1 root root 10 Feb 29 18:03 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 29 18:03 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Feb 29 18:03 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part5 -> ../../sda5
lrwxrwxrwx 1 root root 9 Feb 29 18:34 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 -> ../../sdb

/dev/disk/by-partuuid:
total 0
drwxr-xr-x 2 root root 100 Feb 29 18:03 .
drwxr-xr-x 7 root root 140 Feb 29 18:03 ..
lrwxrwxrwx 1 root root 10 Feb 29 18:03 bd7fc6d3-01 -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 29 18:03 bd7fc6d3-02 -> ../../sda2
lrwxrwxrwx 1 root root 10 Feb 29 18:03 bd7fc6d3-05 -> ../../sda5

/dev/disk/by-path:
total 0
drwxr-xr-x 2 root root 180 Feb 29 18:34 .
drwxr-xr-x 7 root root 140 Feb 29 18:03 ..
lrwxrwxrwx 1 root root 9 Feb 29 18:03 pci-0000:00:01.1-ata-2 -> ../../sr0
lrwxrwxrwx 1 root root 9 Feb 29 18:03 pci-0000:00:01.1-ata-2.0 -> ../../sr0
lrwxrwxrwx 1 root root 9 Feb 29 18:03 pci-0000:00:05.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Feb 29 18:03 pci-0000:00:05.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 29 18:03 pci-0000:00:05.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Feb 29 18:03 pci-0000:00:05.0-scsi-0:0:0:0-part5 -> ../../sda5
lrwxrwxrwx 1 root root 9 Feb 29 18:34 pci-0000:00:05.0-scsi-0:0:1:0 -> ../../sdb

/dev/disk/by-uuid:
total 0
drwxr-xr-x 2 root root 120 Mar 1 10:57 .
drwxr-xr-x 7 root root 140 Feb 29 18:03 ..
lrwxrwxrwx 1 root root 9 Feb 29 18:34 3a5a98d7-1087-469c-af22-83886f3a0474 -> ../../sdb
lrwxrwxrwx 1 root root 10 Feb 29 18:03 6daf9271-24dc-4ee4-a2e9-8a90dbd1d614 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 1 10:57 8812a609-2661-4eb0-b58c-66b9dab7cd01 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Feb 29 18:03 c21bd2fc-4fd8-4057-b194-4836215dafcf -> ../../dm-0

I'm using Debian 12

Jirom-1 commented 8 months ago

These are some of the dmesg logs

[ 10.854299] systemd[1]: Detected virtualization kvm. [ 10.854308] systemd[1]: Detected architecture x86-64. [ 10.922523] systemd[1]: Hostname set to <worker-node-1>. [ 12.378386] systemd[1]: Queued start job for default target graphical.target. [ 12.404137] systemd[1]: Created slice system-getty.slice - Slice /system/getty. [ 12.405007] systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. [ 12.405823] systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. [ 12.406478] systemd[1]: Created slice user.slice - User and Session Slice. [ 12.406653] systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. [ 12.407093] systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. [ 12.407154] systemd[1]: Expecting device dev-disk-by\x2duuid-6daf9271\x2d24dc\x2d4ee4\x2da2e9\x2d8a90dbd1d614.device - /dev/disk/by-uuid/6daf9271-24dc-4ee4-a2e9-8a90dbd1d614... [ 12.407173] systemd[1]: Expecting device dev-mapper-debian\x2d\x2dvm\x2d\x2dtemplate\x2d\x2dvg\x2dswap_1.device - /dev/mapper/debian--vm--template--vg-swap_1... [ 12.407230] systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. [ 12.407292] systemd[1]: Reached target nss-user-lookup.target - User and Group Name Lookups. [ 12.407323] systemd[1]: Reached target remote-fs.target - Remote File Systems. [ 12.407349] systemd[1]: Reached target slices.target - Slice Units. [ 12.407413] systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. [ 12.407610] systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs. [ 12.407943] systemd[1]: Listening on lvm2-lvmpolld.socket - LVM2 poll daemon socket. [ 12.408175] systemd[1]: Listening on systemd-fsckd.socket - fsck to fsckd communication Socket. [ 12.408314] systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. [ 12.409309] systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. [ 12.409627] systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). [ 12.409924] systemd[1]: Listening on systemd-journald.socket - Journal Socket. [ 12.422477] systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. [ 12.422765] systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. [ 12.424858] systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... [ 12.427297] systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... [ 12.429802] systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... [ 12.433069] systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... [ 12.433581] systemd[1]: Finished blk-availability.service - Availability of block devices. [ 12.437050] systemd[1]: Starting keyboard-setup.service - Set the console keyboard layout... [ 12.439979] systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... [ 12.443005] systemd[1]: Starting lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... [ 12.445762] systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... [ 12.448312] systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... [ 12.450983] systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
[ 12.453814] systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... [ 12.456206] systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... [ 12.459098] systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... [ 12.459391] systemd[1]: systemd-fsck-root.service - File System Check on Root Device was skipped because of an unmet condition check (ConditionPathExists=!/run/initramfs/fsck-root). [ 12.463426] systemd[1]: Starting systemd-journald.service - Journal Service... [ 12.467440] systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... [ 12.470133] systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... [ 12.472795] systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... [ 12.623066] systemd[1]: modprobe@dm_mod.service: Deactivated successfully. [ 12.623447] systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. [ 12.624271] systemd[1]: modprobe@drm.service: Deactivated successfully. [ 12.624612] systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. [ 12.659932] loop: module loaded [ 12.661257] systemd[1]: modprobe@loop.service: Deactivated successfully. [ 12.661617] systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. [ 12.661997] systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. [ 12.685911] systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. [ 12.686278] systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. [ 12.706386] systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. [ 12.823481] systemd[1]: modprobe@configfs.service: Deactivated successfully. [ 12.823863] systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. [ 12.858349] systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... [ 12.858799] systemd[1]: Started systemd-journald.service - Journal Service. [ 12.889701] fuse: init (API version 7.37) [ 12.919651] EXT4-fs (dm-0): re-mounted. Quota mode: none. [ 13.003347] lp: driver loaded but no devices found [ 13.016929] ppdev: user-space parallel port driver [ 13.022089] systemd-journald[300]: Received client request to flush runtime journal. [ 16.282146] input: PC Speaker as /devices/platform/pcspkr/input/input6 [ 16.411167] RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer [ 16.674557] cryptd: max_cpu_qlen set to 1000

sergelogvinov commented 8 months ago

Thank you for the report. Yep, the CSI depends on the udev rules that create the /dev/disk/by-id/wwn-XXXX links. (They were probably removed in Debian 12.)

We plan to remove the udev dependency. We will do it in the near future.

Thanks.
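
To confirm whether udev on a given node still ships and applies the rules that create those links, something like the following sketch can help; the rules path is the usual Debian location and may differ on other distributions:

# Is the stock persistent-storage rule that creates wwn- links installed?
grep -n wwn /lib/udev/rules.d/60-persistent-storage.rules
# Re-run the block-device rules and see whether the links show up.
udevadm trigger --subsystem-match=block --action=add && udevadm settle
ls -l /dev/disk/by-id/ | grep wwn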

sergelogvinov commented 8 months ago

@Jirom-1 Try the edge version; I hope it will work now.

Jirom-1 commented 8 months ago

Thanks @sergelogvinov, how do I do this? I installed the plugin with the helm chart. Did you publish a newer version? I tried redeploying and I'm still getting the same errors.

sergelogvinov commented 8 months ago

@Jirom-1

Part of the helm values:

controller:
  plugin:
    image:
      pullPolicy: Always
      tag: edge

node:
  plugin:
    image:
      pullPolicy: Always
      tag: edge
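
For completeness, rolling that out could look roughly like the sketch below; the release name, namespace, and chart reference are placeholders that depend on how the chart was installed in the first place:

# Placeholder release name and chart reference; adjust to your installation.
helm upgrade proxmox-csi-plugin <chart-reference> \
  --namespace csi-proxmox \
  -f values.yaml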

You probably need to delete all the pods after deploying: kubectl -n csi-proxmox delete po --all

Jirom-1 commented 8 months ago

I get these errors now.

`Defaulted container "proxmox-csi-plugin-node" out of: proxmox-csi-plugin-node, csi-node-driver-registrar, liveness-probe I0315 14:41:50.094695 1 main.go:54] Driver version 0.4.0, GitVersion edge, GitCommit 41b19bd I0315 14:41:50.094916 1 main.go:55] Driver CSI Spec version: 1.9.0 I0315 14:41:50.094935 1 main.go:83] Building kube configs for running in cluster... I0315 14:41:50.284324 1 mount_linux.go:282] Detected umount with safe 'not mounted' behavior I0315 14:41:50.284546 1 main.go:140] Listening for connection on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"} I0315 14:42:03.713784 1 identity.go:38] GetPluginInfo: called I0315 14:42:03.972621 1 node.go:532] NodeGetInfo: called with args {} I0315 14:42:12.566062 1 identity.go:38] GetPluginInfo: called I0315 14:51:39.737034 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:51:39.746649 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:51:39.748586 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:51:39.751580 1 node.go:89] NodeStageVolume: called with args {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443032","lun":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/30bcf9339e68058f1af9bcddcf5cd4e7dca1ba774a9024ca68a887dfc0771d16/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"ssd":"false","storage":"Drives-6","storage.kubernetes.io/csiProvisionerIdentity":"1710514073048-4415-csi.proxmox.sinextra.dev"},"volume_id":"PALITRONICA-PVE/palitronica-6/Drives-6/vm-9999-pvc-a68b1593-72b3-4e90-ac98-bff5f57535a8"} E0315 14:51:49.753861 1 node.go:113] NodePublishVolume: failed to get device path, error: device /dev/disk/by-id/wwn-0x5056432d49443032 is not found E0315 14:51:49.753962 1 main.go:122] GRPC error: rpc error: code = InvalidArgument desc = device /dev/disk/by-id/wwn-0x5056432d49443032 is not found I0315 14:51:50.324758 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:51:50.333370 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:51:50.335298 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:51:50.337513 1 node.go:89] NodeStageVolume: called with args {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443032","lun":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/30bcf9339e68058f1af9bcddcf5cd4e7dca1ba774a9024ca68a887dfc0771d16/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"ssd":"false","storage":"Drives-6","storage.kubernetes.io/csiProvisionerIdentity":"1710514073048-4415-csi.proxmox.sinextra.dev"},"volume_id":"PALITRONICA-PVE/palitronica-6/Drives-6/vm-9999-pvc-a68b1593-72b3-4e90-ac98-bff5f57535a8"} E0315 14:52:00.338923 1 node.go:113] NodePublishVolume: failed to get device path, error: device /dev/disk/by-id/wwn-0x5056432d49443032 is not found E0315 14:52:00.338968 1 main.go:122] GRPC error: rpc error: code = InvalidArgument desc = device /dev/disk/by-id/wwn-0x5056432d49443032 is not found I0315 14:52:01.427134 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:01.436306 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:01.438038 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:01.440148 1 node.go:89] NodeStageVolume: called with args 
{"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443032","lun":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/30bcf9339e68058f1af9bcddcf5cd4e7dca1ba774a9024ca68a887dfc0771d16/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"ssd":"false","storage":"Drives-6","storage.kubernetes.io/csiProvisionerIdentity":"1710514073048-4415-csi.proxmox.sinextra.dev"},"volume_id":"PALITRONICA-PVE/palitronica-6/Drives-6/vm-9999-pvc-a68b1593-72b3-4e90-ac98-bff5f57535a8"} E0315 14:52:11.440984 1 node.go:113] NodePublishVolume: failed to get device path, error: device /dev/disk/by-id/wwn-0x5056432d49443032 is not found E0315 14:52:11.441110 1 main.go:122] GRPC error: rpc error: code = InvalidArgument desc = device /dev/disk/by-id/wwn-0x5056432d49443032 is not found I0315 14:52:13.552609 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:13.560640 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:13.562295 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:13.564125 1 node.go:89] NodeStageVolume: called with args {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443032","lun":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/30bcf9339e68058f1af9bcddcf5cd4e7dca1ba774a9024ca68a887dfc0771d16/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"ssd":"false","storage":"Drives-6","storage.kubernetes.io/csiProvisionerIdentity":"1710514073048-4415-csi.proxmox.sinextra.dev"},"volume_id":"PALITRONICA-PVE/palitronica-6/Drives-6/vm-9999-pvc-a68b1593-72b3-4e90-ac98-bff5f57535a8"} E0315 14:52:23.565390 1 node.go:113] NodePublishVolume: failed to get device path, error: device /dev/disk/by-id/wwn-0x5056432d49443032 is not found E0315 14:52:23.565697 1 main.go:122] GRPC error: rpc error: code = InvalidArgument desc = device /dev/disk/by-id/wwn-0x5056432d49443032 is not found I0315 14:52:27.892389 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:27.901085 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:27.902762 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:27.904735 1 node.go:89] NodeStageVolume: called with args {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443032","lun":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/30bcf9339e68058f1af9bcddcf5cd4e7dca1ba774a9024ca68a887dfc0771d16/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"ssd":"false","storage":"Drives-6","storage.kubernetes.io/csiProvisionerIdentity":"1710514073048-4415-csi.proxmox.sinextra.dev"},"volume_id":"PALITRONICA-PVE/palitronica-6/Drives-6/vm-9999-pvc-a68b1593-72b3-4e90-ac98-bff5f57535a8"} E0315 14:52:37.906001 1 node.go:113] NodePublishVolume: failed to get device path, error: device /dev/disk/by-id/wwn-0x5056432d49443032 is not found E0315 14:52:37.906078 1 main.go:122] GRPC error: rpc error: code = InvalidArgument desc = device /dev/disk/by-id/wwn-0x5056432d49443032 is not found I0315 14:52:46.308251 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:46.317571 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:46.319429 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:52:46.321892 1 node.go:89] 
NodeStageVolume: called with args {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443032","lun":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/30bcf9339e68058f1af9bcddcf5cd4e7dca1ba774a9024ca68a887dfc0771d16/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"ssd":"false","storage":"Drives-6","storage.kubernetes.io/csiProvisionerIdentity":"1710514073048-4415-csi.proxmox.sinextra.dev"},"volume_id":"PALITRONICA-PVE/palitronica-6/Drives-6/vm-9999-pvc-a68b1593-72b3-4e90-ac98-bff5f57535a8"} E0315 14:52:56.323728 1 node.go:113] NodePublishVolume: failed to get device path, error: device /dev/disk/by-id/wwn-0x5056432d49443032 is not found E0315 14:52:56.323817 1 main.go:122] GRPC error: rpc error: code = InvalidArgument desc = device /dev/disk/by-id/wwn-0x5056432d49443032 is not found I0315 14:53:12.344729 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:53:12.354091 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:53:12.356277 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:53:12.358264 1 node.go:89] NodeStageVolume: called with args {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443032","lun":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/30bcf9339e68058f1af9bcddcf5cd4e7dca1ba774a9024ca68a887dfc0771d16/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"ssd":"false","storage":"Drives-6","storage.kubernetes.io/csiProvisionerIdentity":"1710514073048-4415-csi.proxmox.sinextra.dev"},"volume_id":"PALITRONICA-PVE/palitronica-6/Drives-6/vm-9999-pvc-a68b1593-72b3-4e90-ac98-bff5f57535a8"}

E0315 14:53:22.360138 1 node.go:113] NodePublishVolume: failed to get device path, error: device /dev/disk/by-id/wwn-0x5056432d49443032 is not found E0315 14:53:22.360192 1 main.go:122] GRPC error: rpc error: code = InvalidArgument desc = device /dev/disk/by-id/wwn-0x5056432d49443032 is not found I0315 14:53:54.643960 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:53:54.654332 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:53:54.657120 1 node.go:512] NodeGetCapabilities: called with args {} I0315 14:53:54.660711 1 node.go:89] NodeStageVolume: called with args {"publish_context":{"DevicePath":"/dev/disk/by-id/wwn-0x5056432d49443032","lun":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.proxmox.sinextra.dev/30bcf9339e68058f1af9bcddcf5cd4e7dca1ba774a9024ca68a887dfc0771d16/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"ssd":"false","storage":"Drives-6","storage.kubernetes.io/csiProvisionerIdentity":"1710514073048-4415-csi.proxmox.sinextra.dev"},"volume_id":"PALITRONICA-PVE/palitronica-6/Drives-6/vm-9999-pvc-a68b1593-72b3-4e90-ac98-bff5f57535a8"} E0315 14:54:04.662351 1 node.go:113] NodePublishVolume: failed to get device path, error: device /dev/disk/by-id/wwn-0x5056432d49443032 is not found E0315 14:54:04.662410 1 main.go:122] GRPC error: rpc error: code = InvalidArgument desc = device /dev/disk/by-id/wwn-0x5056432d49443032 is not found `

sergelogvinov commented 8 months ago

Hi, can you show me the proxmox VM config? cat /etc/pve/qemu-server/$ID.conf

github-actions[bot] commented 2 months ago

This issue is stale because it has been open 180 days with no activity. Remove stale label or comment or this will be closed in 14 days.

github-actions[bot] commented 2 months ago

This issue was closed because it has been stalled for 14 days with no activity.