nabijaczleweli opened 2 years ago
In the interest of science: I've removed the cache device and rebooted for an unrelated reason today, then added it like this:
$ l /dev/disk/by-partlabel/filling-cache
lrwxrwxrwx 1 root root 15 Mar 13 00:20 /dev/disk/by-partlabel/filling-cache -> ../../nvme0n1p4
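For anyone checking the same thing locally: a /dev/disk/by-* alias and a kernel name can be compared by canonicalising both with readlink -f. A sketch (the same_device helper is made up here, not part of zfsutils):

```shell
# same_device A B: true when A and B canonicalise to the same path.
# readlink -f follows the whole symlink chain, so a /dev/disk/by-* alias
# and its ../../nvme0n1p4-style target compare equal.
same_device() {
    [ "$(readlink -f -- "$1")" = "$(readlink -f -- "$2")" ]
}

if same_device /dev/disk/by-partlabel/filling-cache /dev/nvme0n1p4; then
    echo "same device"
fi
```

On the system above this prints "same device".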
# zpool add -nP filling cache filling-cache
would update 'filling' to the following configuration:
filling
  mirror-0
    /dev/disk/by-id/ata-HGST_HUS726T4TALE6L4_V6K2L4RR-part1
    /dev/disk/by-id/ata-HGST_HUS726T4TALE6L4_V6K2MHYR-part1
  raidz1-1
    /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_VDKT237K-part1
    /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_VDGY075D-part1
    /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_VDKVRRJK-part1
cache
  /dev/disk/by-partlabel/filling-cache
# zpool add filling cache filling-cache
$ zpool list -v filling
NAME                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
filling                                25.5T  6.58T  18.9T        -       64M    1%    25%  1.00x  ONLINE  -
  mirror                               3.64T   460G  3.19T        -       64M   10%  12.3%      -  ONLINE
    ata-HGST_HUS726T4TALE6L4_V6K2L4RR      -      -      -        -       64M     -      -      -  ONLINE
    ata-HGST_HUS726T4TALE6L4_V6K2MHYR      -      -      -        -       64M     -      -      -  ONLINE
  raidz1                               21.8T  6.13T  15.7T        -         -    0%  28.1%      -  ONLINE
    ata-HGST_HUS728T8TALE6L4_VDKT237K      -      -      -        -         -     -      -      -  ONLINE
    ata-HGST_HUS728T8TALE6L4_VDGY075D      -      -      -        -         -     -      -      -  ONLINE
    ata-HGST_HUS728T8TALE6L4_VDKVRRJK      -      -      -        -         -     -      -      -  ONLINE
cache                                      -      -      -        -         -     -      -      -  -
  filling-cache                        63.0G   450M  62.5G        -         -    0%  0.69%      -  ONLINE
$ zpool list -vP filling
NAME                                                          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
filling                                                      25.5T  6.58T  18.9T        -       64M    1%    25%  1.00x  ONLINE  -
  mirror                                                     3.64T   460G  3.19T        -       64M   10%  12.3%      -  ONLINE
    /dev/disk/by-id/ata-HGST_HUS726T4TALE6L4_V6K2L4RR-part1      -      -      -        -       64M     -      -      -  ONLINE
    /dev/disk/by-id/ata-HGST_HUS726T4TALE6L4_V6K2MHYR-part1      -      -      -        -       64M     -      -      -  ONLINE
  raidz1                                                     21.8T  6.13T  15.7T        -         -    0%  28.1%      -  ONLINE
    /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_VDKT237K-part1      -      -      -        -         -     -      -      -  ONLINE
    /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_VDGY075D-part1      -      -      -        -         -     -      -      -  ONLINE
    /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_VDKVRRJK-part1      -      -      -        -         -     -      -      -  ONLINE
cache                                                            -      -      -        -         -     -      -      -  -
  /dev/disk/by-partlabel/filling-cache                       63.0G   474M  62.5G        -         -    0%  0.73%      -  ONLINE
I'll update this when I next reboot if I remember.
Happened again; I've updated to 2.1.4 in the meantime:
nabijaczleweli@tarta:~$ zpool list -v filling
NAME                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
filling                                25.5T  6.81T  18.7T        -       64M    4%    26%  1.00x  ONLINE  -
  mirror-0                             3.64T   495G  3.16T        -       64M   16%  13.3%      -  ONLINE
    ata-HGST_HUS726T4TALE6L4_V6K2L4RR      -      -      -        -       64M     -      -      -  ONLINE
    ata-HGST_HUS726T4TALE6L4_V6K2MHYR      -      -      -        -       64M     -      -      -  ONLINE
  raidz1-1                             21.8T  6.33T  15.5T        -         -    2%  29.0%      -  ONLINE
    ata-HGST_HUS728T8TALE6L4_VDKT237K      -      -      -        -         -     -      -      -  ONLINE
    ata-HGST_HUS728T8TALE6L4_VDGY075D      -      -      -        -         -     -      -      -  ONLINE
    ata-HGST_HUS728T8TALE6L4_VDKVRRJK      -      -      -        -         -     -      -      -  ONLINE
cache                                      -      -      -        -         -     -      -      -  -
  nvme0n1p4                            63.0G  39.3G  23.7G        -         -    0%  62.4%      -  ONLINE
Last import was 2023-01-26.07:19:16 (zpool import -aN -o cachefile=none):
$ zpool list -v filling
NAME                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
filling                                25.5T  9.21T  16.3T        -       64M    8%    36%  1.00x  ONLINE  -
  mirror-0                             3.64T   899G  2.76T        -       64M   26%  24.1%      -  ONLINE
    ata-HGST_HUS726T4TALE6L4_V6K2L4RR  3.64T      -      -        -       64M     -      -      -  ONLINE
    ata-HGST_HUS726T4TALE6L4_V6K2MHYR  3.64T      -      -        -       64M     -      -      -  ONLINE
  raidz1-1                             21.8T  8.33T  13.5T        -         -    6%  38.2%      -  ONLINE
    ata-HGST_HUS728T8TALE6L4_VDKT237K  7.28T      -      -        -         -     -      -      -  ONLINE
    ata-HGST_HUS728T8TALE6L4_VDGY075D  7.28T      -      -        -         -     -      -      -  ONLINE
    ata-HGST_HUS728T8TALE6L4_VDKVRRJK  7.28T      -      -        -         -     -      -      -  ONLINE
cache                                      -      -      -        -         -     -      -      -  -
  nvme0n1p4                            63.0G  59.7G  3.25G        -         -    0%  94.8%      -  ONLINE
Replacement still repros. Haven't tried remove/attach; I'll do that when I next reboot, if I remember.
$ zfs version
zfs-2.1.9-1~bpo11+1
zfs-kmod-2.1.7-1
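Note the skew above: userland zfs-2.1.9 against zfs-kmod-2.1.7. A quick way to flag that, sketched under the assumption of the two-line Debian-style zfs version output shown here:

```shell
# versions_match: read `zfs version` output on stdin and succeed only
# when the userland (zfs-X.Y.Z...) and kernel-module (zfs-kmod-X.Y.Z...)
# versions agree. A sketch; assumes the two-line format shown above.
versions_match() {
    awk -F- '/^zfs-kmod-/ { kmod = $3 }
             /^zfs-[0-9]/ { user = $2 }
             END { exit (user != "" && user == kmod) ? 0 : 1 }'
}
```

Used as: zfs version | versions_match || echo "userland and kmod versions differ".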
Experienced the same, including the issues replacing the device. I'm on 2.1.12. My device name swapped randomly at some point back in late 2022, but kept working until the block device name changed after a kernel upgrade.
Not only could I not replace it, just as OP...
# zpool replace tank nvme0n1 "/dev/disk/by-id/nvme-Samsung_SSD_960_PRO_512GB_XXX"
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/nvme-Samsung_SSD_960_PRO_512GB_XXX-part1 is part of unknown pool 'tank'
Here the "invalid vdev specification" error is nonsensical. It makes you think you need to specify a vdev, which you don't, or you'll get "too many arguments", like this:
# zpool replace tank cache nvme0n1 "/dev/disk/by-id/nvme-Samsung_SSD_960_PRO_512GB_XXX"
too many arguments
usage:
replace [-fsw] [-o property=value] <pool> <device> [new-device]
# zpool replace tank -f nvme0n1 "/dev/disk/by-id/nvme-Samsung_SSD_960_PRO_512GB_XXX"
cannot replace nvme0n1 with /dev/disk/by-id/nvme-Samsung_SSD_960_PRO_512GB_XXX: device is in use as a cache
... but attempting to remove it threw an error despite actually removing it:
# zpool status
# --- the cache device is present, as 'nvme0n1'
# zpool remove tank nvme0n1
cannot remove nvme0n1: no such device in pool
# zpool status
# --- the cache device is now gone; it's just the vdevs
I was then able to add it afterward as usual, and for now it shows as being imported via by-id.
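For anyone scripting that remove-and-re-add dance, the stable alias for a bare kernel name can be recovered by scanning a by-* directory for symlinks that resolve to the same device. A sketch (stable_names is a made-up helper, not a zpool subcommand):

```shell
# stable_names DEV DIR: print every symlink in DIR resolving to the same
# block device as DEV; e.g. stable_names /dev/nvme0n1p4 /dev/disk/by-id.
# The printed path can then be given to `zpool add <pool> cache <path>`.
stable_names() {
    dev=$(readlink -f -- "$1") || return 1
    for link in "$2"/*; do
        [ -L "$link" ] || continue
        [ "$(readlink -f -- "$link")" = "$dev" ] && printf '%s\n' "$link"
    done
    return 0
}
```

Picking the by-id (or by-partlabel) result instead of the kernel name is what keeps the pool config pointing at a stable path, at least until the bug described here reverts it.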
Seems like multiple bugs all over the place.
System information
Describe the problem you're observing
After exporting and importing a pool whose cache device was added under a /dev/disk/by-*-style name, that name is lost and the /dev basename is used instead. zpool replace-ing it back works, but only until the next re-import.

The cache device here was added as filling-cache (under /dev/disk/by-partlabel/: /dev/disk/by-partlabel/filling-cache -> ../../nvme0n1p4). On import, I once again got the bare nvme0n1p4 name. What's worse is that zpool replace filling nvme0n1p4 filling-cache errors out, and forcing the matter fails as well. The disks are, well, disks. The cache device is part of the NVMe drive (nvme0n1p4).
Describe how to reproduce the problem
Dunno; I've tried it a few times and never got it to happen, except on that pool. Last time I removed the cache and attached it again under the filling-cache name, and on this import it reverted again.

This isn't a race, since the device, partitions, and names already exist in the initrd, while the module for the disks is only loaded in the real root. Plus, the import took from 20:28:19 to 20:41:45 and the import depends on settle, so there was ample time.