It appears that the OMV-ZFS plugin does not properly support NVMe drives when deleting objects from pools that were created on them with "by-id" device aliasing. Looking at how `OMVModuleZFSZpool::getDevDisks` works, anything that uses this method (currently, only object deletion) will fail on these drives because of a wrong assumption: that drives exist only as `/dev/sdXY` devices, where `X` is the drive's letter and `Y` is its partition number.
Consider the following scenario:

1. Create a zpool on an NVMe drive (a basic pool is fine for this demonstration) with "by-id" aliasing.
2. The pool is created properly; no problems here.
3. Try to delete the pool.
4. The deletion fails with the following message:
```
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; zpool labelclear /dev/nvme0n11 2>&1' with exit code '1': failed to open /dev/nvme0n11: No such file or directory
```
As you can see above, the plugin tries to clear the label on the `/dev/nvme0n11` device, which does not exist, because that is not how Linux names NVMe partition entries; so far I've only seen names like `/dev/nvme0n1p1`. The extraneous `1` at the end is appended by `OMVModuleZFSZpool::getDevDisks`, which appears to do this for both "by-id" and "by-path" aliases.
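To illustrate the naming mismatch, here is a minimal Python sketch (illustrative only; the plugin itself is PHP, and both helper functions below are hypothetical, not plugin code):

```python
def naive_partition_name(disk: str, part: int) -> str:
    # What getDevDisks effectively does today: just append the number.
    return f"{disk}{part}"

def correct_partition_name(disk: str, part: int) -> str:
    # Kernel convention: devices whose base name ends in a digit
    # (nvme0n1, mmcblk0, ...) get a 'p' separator before the partition number.
    sep = "p" if disk[-1].isdigit() else ""
    return f"{disk}{sep}{part}"

print(naive_partition_name("/dev/nvme0n1", 1))    # /dev/nvme0n11  (broken)
print(correct_partition_name("/dev/nvme0n1", 1))  # /dev/nvme0n1p1 (correct)
print(correct_partition_name("/dev/sda", 1))      # /dev/sda1      (unchanged)
```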
To be fair, this does not appear to be entirely the plugin's fault, as I've been able to find a similar problem in the zfsOnLinux repo here: https://github.com/zfsonlinux/zfs/issues/3478 - it looks like pool creation does not properly partition CCISS (and, as reported near the end of that thread, NVMe) drives.
However, even if that issue eventually gets fixed, the plugin's `::getDevDisks` method is still wrong: it should use the `nvme<disk_no>p<partition_number>` scheme for NVMe drives, not the `sd<disk_letter><partition_number>` scheme used for regular SATA drives.
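One possible way to sidestep name reconstruction entirely would be to resolve the alias symlink instead of rebuilding the device name by string concatenation. A sketch, assuming the pool members are referenced through `/dev/disk/by-id/` symlinks (the alias name below is made up):

```python
import os

# The by-id (or by-path) symlink already points at the real partition node,
# whatever naming scheme the kernel used for the underlying drive.
def resolve_alias(alias: str) -> str:
    return os.path.realpath(alias)

# Hypothetical alias; on a real system this would resolve to /dev/nvme0n1p1.
print(resolve_alias("/dev/disk/by-id/nvme-ExampleVendor_SSD-part1"))
```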
This problem (in conjunction with the very limited "by-id" regexp capabilities) will also occur when using zpools on encrypted drives (e.g. ones set up with LUKS).
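For example, LUKS containers show up as device-mapper nodes, which follow neither naming scheme. A small demonstration (the device names are hypothetical, and the regexp is my guess at the plugin's assumption, not its actual pattern):

```python
import re

# Hypothetical resolved device names for three by-id aliases:
devices = ["/dev/sda1", "/dev/nvme0n1p1", "/dev/dm-0"]

# A pattern reflecting the apparent assumption: sd<letter><number>.
sd_style = re.compile(r"^/dev/sd[a-z]+\d+$")

for dev in devices:
    print(dev, "matches sd-style:", bool(sd_style.match(dev)))
# Only /dev/sda1 matches; NVMe and device-mapper nodes need their own handling.
```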
Tested using:

- openmediavault: 4.1.11
- openmediavault-zfs: 4.0.4