openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

cosmetic: inconsistent length of removed vdev IDs, and odd extra line break at "(awaiting resilver)" #11863

Open devZer0 opened 3 years ago

devZer0 commented 3 years ago

Just curious: this is not a real problem, but the output below looks weird/ugly. The removed vdev IDs have inconsistent lengths, and "(awaiting resilver)" ends up oddly placed.

Distribution Name    | proxmox
Distribution Version | pve 6.3
Linux Kernel         | 5.4.106-1-pve
Architecture         | x86_64
ZFS Version          | 2.0.4-pve1
SPL Version          | 2.0.4-pve1

  pool: zfspool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Apr  8 19:53:32 2021
    1.29T scanned at 4.20G/s, 40.2G issued at 131M/s, 20.1T total
    0B resilvered, 0.19% done, 1 days 20:39:50 to go
config:

    NAME                        STATE     READ WRITE CKSUM
    zfspool                     DEGRADED     0     0     0
      raidz3-0                  DEGRADED     0     0     0
        sda                     ONLINE       0     0     0
        sdb                     ONLINE       0     0     0
        sdc                     ONLINE       0     0     0
        sdd                     ONLINE       0     0     0
        sde                     ONLINE       0     0     0
        sdf                     ONLINE       0     0     0
        sdg                     ONLINE       0     0     0
        sdh                     ONLINE       0     0     0
        sdi                     ONLINE       0     0     0
        sdj                     ONLINE       0     0     0
        replacing-10            DEGRADED     0     0     0
          8426652289251556154   FAULTED      0     0     0  was /dev/sdk1  (awaiting resilver)
          sdl                   ONLINE       0     0     0
        replacing-11            DEGRADED     0     0     0
          16973839097561849864  UNAVAIL      0     0     0  was /dev/sdm1
          sdk                   ONLINE       0     0     0  (awaiting resilver)
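
The misalignment is easy to reproduce. A minimal sketch (my assumption about how a status printer works, not the actual zpool source): the NAME column width is derived from the longest name in the listing, so a mix of short device names like sdl and long numeric GUIDs shifts where the STATE column lands, and trailing notes such as "(awaiting resilver)" make any drift visible.

```python
# Hypothetical rows modeled on the zpool status output above.
# (name, state, trailing note)
rows = [
    ("replacing-10",        "DEGRADED", ""),
    ("8426652289251556154", "FAULTED",  "was /dev/sdk1  (awaiting resilver)"),
    ("sdl",                 "ONLINE",   ""),
]

# Pad every name to the longest one, then append the state column.
width = max(len(name) for name, _, _ in rows)
for name, state, note in rows:
    print(f"{name:<{width}}  {state:<9} {note}".rstrip())
```

If any name (or a name plus its note) exceeds the computed width, the neat columns break down, which is essentially the cosmetic effect reported here.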
devZer0 commented 3 years ago

I remember a bug report where an extra "1" was added to the vdev number/GUID; not sure if it's related:

https://github.com/openzfs/zfs/issues/8214

On my system, all vdev GUIDs with 20 characters (instead of 19) have in common that they start with a "1":

# zpool status -vg

  pool: zfspool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Apr  8 19:53:32 2021
    6.92T scanned at 4.27G/s, 193G issued at 119M/s, 20.1T total
    0B resilvered, 0.94% done, 2 days 00:48:40 to go
config:

    NAME                        STATE     READ WRITE CKSUM
    zfspool                     DEGRADED     0     0     0
      240968831886697617        DEGRADED     0     0     0
        1422695628515986972     ONLINE       0     0     0
        16253356281015253291    ONLINE       0     0     0
        1951001619305559488     ONLINE       0     0     0
        13813256813625619551    ONLINE       0     0     0
        8122478024504584581     ONLINE       0     0     0
        12867405114782505395    ONLINE       0     0     0
        10804793770363034089    ONLINE       0     0     0
        3823201433193664703     ONLINE       0     0     0
        4925170332144262982     ONLINE       0     0     0
        16372593413872141637    ONLINE       0     0     0
        8378113705060586287     DEGRADED     0     0     0
          8426652289251556154   FAULTED      0     0     0  was /dev/sdk1  (awaiting resilver)
          7965028280830866775   ONLINE       0     0     0
        12496718773673251249    DEGRADED     0     0     0
          16973839097561849864  UNAVAIL      0     0     0  was /dev/sdm1
          10538017679096062033  ONLINE       0     0     0  (awaiting resilver)

errors: No known data errors
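
There is a simple arithmetic reason for that pattern: ZFS vdev GUIDs are unsigned 64-bit integers, and the largest uint64 is 18446744073709551615, a 20-digit number starting with "1". So any 20-digit GUID must begin with a "1"; it is not an extra digit being prepended. A quick sketch (the GUIDs below are copied from the listing above):

```python
# Max unsigned 64-bit value: 20 decimal digits, and below 2 * 10**19,
# so every 20-digit GUID necessarily starts with "1".
MAX_U64 = 2**64 - 1

guids = [
    240968831886697617,     # 18 digits
    1422695628515986972,    # 19 digits
    16253356281015253291,   # 20 digits, starts with "1"
    16973839097561849864,   # 20 digits, starts with "1"
]

for g in guids:
    s = str(g)
    print(f"{s:>20}  {len(s)} digits")

# Every 20-digit uint64 lies in [10**19, MAX_U64], i.e. starts with "1".
assert 10**19 <= MAX_U64 < 2 * 10**19
assert all(str(g)[0] == "1" for g in guids if len(str(g)) == 20)
```

This also explains the varying column widths: GUID lengths legitimately range up to 20 digits, so nothing is being appended.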
devZer0 commented 3 years ago

Now, after a while, the resilver scan has finished and the actual resilver has started, and the output looks even more inconsistent.

Note that both disks had been re-attached to the pool after being wiped with wipefs/zpool labelclear, and when re-attaching still didn't work, I wiped both with "dd if=/dev/zero....". The OS was then reinstalled and the pool was re-imported with the two wiped disks attached.


  pool: zfspool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Apr  8 19:53:32 2021
    11.0T scanned at 2.78G/s, 1.12T issued at 290M/s, 20.1T total
    141G resilvered, 5.55% done, 0 days 19:07:43 to go
config:

    NAME                        STATE     READ WRITE CKSUM
    zfspool                     DEGRADED     0     0     0
      raidz3-0                  DEGRADED     0     0     0
        sda                     ONLINE       0     0     0
        sdb                     ONLINE       0     0     0
        sdc                     ONLINE       0     0     0
        sdd                     ONLINE       0     0     0
        sde                     ONLINE       0     0     0
        sdf                     ONLINE       0     0     0
        sdg                     ONLINE       0     0     0
        sdh                     ONLINE       0     0     0
        sdi                     ONLINE       0     0     0
        sdj                     ONLINE       0     0     0
        replacing-10            DEGRADED     0     0     0
          8426652289251556154   FAULTED      0     0     0  was /dev/sdk1  (awaiting resilver)
          sdl                   ONLINE       0     0     0  (resilvering)
        replacing-11            DEGRADED     0     0     0
          16973839097561849864  UNAVAIL      0     0     0  was /dev/sdm1
          sdk                   ONLINE       0     0     0  (resilvering)
devZer0 commented 3 years ago

One day later...

Apparently resilvering is done one disk after the other per raidz vdev?

So I'm curious why sdk1 -> sdl was in the states "resilvering" and "awaiting resilver" at the same time; isn't that misleading?

It also doesn't look consistent with the info next to "scan:" and "action:" at the top.

So we have three states per disk during a resilver, do we? Is this correct?


  pool: zfspool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Apr  9 10:49:36 2021
    10.4T scanned at 11.0G/s, 613G issued at 651M/s, 20.1T total
    25.0G resilvered, 2.97% done, 0 days 08:44:29 to go
config:

    NAME                        STATE     READ WRITE CKSUM
    zfspool                     DEGRADED     0     0     0
      raidz3-0                  DEGRADED     0     0     0
        sda                     ONLINE       0     0     0
        sdb                     ONLINE       0     0     0
        sdc                     ONLINE       0     0     0
        sdd                     ONLINE       0     0     0
        sde                     ONLINE       0     0     0
        sdf                     ONLINE       0     0     0
        sdg                     ONLINE       0     0     0
        sdh                     ONLINE       0     0     0
        sdi                     ONLINE       0     0     0
        sdj                     ONLINE       0     0     0
        sdl                     ONLINE       0     0     0
        replacing-11            DEGRADED     0     0     0
          16973839097561849864  UNAVAIL      0     0     0  was /dev/sdm1
          sdk                   ONLINE       0     0     0  (resilvering)

errors: No known data errors
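
The three per-device annotations asked about above can be sketched as a toy state function (my simplification as an assumption, not the actual libzfs print logic): a device line carries "(resilvering)" while an active resilver is touching it, "(awaiting resilver)" when a resilver has been queued/deferred for it but has not started, and no note otherwise.

```python
def resilver_note(scan_active: bool, deferred: bool) -> str:
    """Toy model of the per-device suffix in zpool status output."""
    if scan_active:
        return "(resilvering)"        # part of the currently running resilver
    if deferred:
        return "(awaiting resilver)"  # queued for a later/deferred resilver
    return ""                         # not involved, or already resilvered

print(resilver_note(True, False))   # (resilvering)
print(resilver_note(False, True))   # (awaiting resilver)
print(resilver_note(False, False))  # (no note)
```

Under this model, seeing both notes inside one replacing vdev is consistent: the old faulted device can still be marked as awaiting a deferred resilver while its replacement is actively resilvering.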
stale[bot] commented 2 years ago

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.