openzfsonosx / zfs

OpenZFS on OS X
https://openzfsonosx.org/

pool I/O is currently suspended #741

Open whizzrd opened 4 years ago

whizzrd commented 4 years ago

After #740, the pool imports, but any subsequent operation results in the zpool command hanging or reporting "pool I/O is currently suspended":

$ sudo zpool offline zpool media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2
cannot offline media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2: pool I/O is currently suspended
$ sudo zpool export -f zpool 
cannot export 'zpool': pool I/O is currently suspended
$ sudo zpool clear zpool media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2
<hangs>
$ sudo zpool import -o failmode=continue zpool 
<still results in "pool I/O is currently suspended">

The issue does not seem to share a root cause with #104 or #277, but the symptoms match.
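
(For context: failmode is a standard pool property, with values wait, continue, and panic, that controls how the pool reacts once I/O is suspended. On a pool that is behaving, it can be inspected or changed with something like the sketch below; with I/O already suspended, these commands will themselves hang, as shown above. The pool name zpool is the one from this report.)

$ sudo zpool get failmode zpool
$ sudo zpool set failmode=continue zpool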

whizzrd commented 4 years ago

Well, I guess I'm glad I was just testing ZFS when it panicked and trashed the pool (or the pool crashed and panicked ZFS). I had to force-shut-down the Mac several times, as it hangs completely during shutdown while pool I/O is suspended. I decided to dd the drive to a file for analysis and diskutil zeroDisk it, to try to prevent the hangs after automounting on boot. Since the pool references the drive by an invariant UUID, this does not seem to be a device-renumbering problem. Some guidance would be appreciated on what to do when ZFS encounters this issue; all my attempts (zpool import -o failmode=continue, zpool import -FN, deleting the zpool cache) fail.
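
For reference, the imaging and wiping steps were roughly as follows (a sketch; the disk identifier disk11 matches the failed vdev shown in the status output below):

$ sudo dd if=/dev/rdisk11 of=~/Desktop/zpool.dmg bs=1m    # raw image of the failed drive for offline analysis
$ sudo diskutil zeroDisk disk11                           # zero the original drive so the pool no longer auto-imports at boot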

whizzrd commented 4 years ago

Issue is reproducible using an image of the failed drive

$ sudo hdiutil attach -nomount zpool.dmg
$ sudo zpool import zpool -d /dev/ 
$ sudo zpool status -v
  pool: zpool
 state: ONLINE
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://zfsonlinux.org/msg/ZFS-8000-JQ
  scan: scrub repaired 32K in 0 days 01:20:07 with 0 errors on Tue Nov 26 21:44:20 2019
config:

    NAME        STATE     READ WRITE CKSUM
    zpool       ONLINE       0     0     4
      disk11    ONLINE       0     0     4

errors: List of errors unavailable (insufficient privileges)
$ sudo zpool clear zpool
# hangs
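
(Incidentally, for this kind of post-mortem the image can also be attached read-only so experiments cannot modify it further; a read-only import, zpool import -o readonly=on, would then be needed.)

$ sudo hdiutil attach -nomount -readonly zpool.dmg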

Possibly relevant snippets of a spindump taken while sudo zpool clear zpool hangs:

Process:         zconfigd [56]
Path:            /usr/local/bin/zconfigd
Architecture:    x86_64
Parent:          launchd [1]
UID:             0
Task size:       4660 KB
Note:            1 idle work queue thread omitted

  Thread 0x2a7              DispatchQueue 1           999 samples (1-999)       priority 20 (base 20)
  999  start + 1 (libdyld.dylib + 4117) [0x7fff67112015]
    999  main + 341 (zconfigd + 3234) [0x10bfa4ca2]
      999  CFRunLoopRun + 99 (CoreFoundation + 785875) [0x7fff3f174dd3]
        999  CFRunLoopRunSpecific + 487 (CoreFoundation + 530727) [0x7fff3f136927]
          999  __CFRunLoopRun + 1783 (CoreFoundation + 533175) [0x7fff3f1372b7]
            999  __CFRunLoopServiceMachPort + 341 (CoreFoundation + 536421) [0x7fff3f137f65]
              999  mach_msg_trap + 10 (libsystem_kernel.dylib + 78346) [0x7fff6725920a]
               *999  ipc_mqueue_receive_continue + 0 (kernel + 346432) [0xffffff8000254940]
Process:         zpool [3033]
Path:            /usr/local/bin/zpool
Architecture:    x86_64
Parent:          sudo [3032]
Responsible:     Terminal [481]
UID:             0
Task size:       5200 KB

  Thread 0x40e43            DispatchQueue 140735869343320                       999 samples (1-999)       priority 31 (base 31)
  999  start + 1 (libdyld.dylib + 4117) [0x7fff67112015]
    999  main + 514 (zpool + 23204) [0x10f0c1aa4]
      999  zpool_do_clear + 347 (zpool + 35665) [0x10f0c4b51]
        999  zpool_clear + 468 (libzfs.2.dylib + 113684) [0x10f545c14]
          999  zfs_ioctl + 55 (libzfs.2.dylib + 167147) [0x10f552ceb]
            999  __ioctl + 10 (libsystem_kernel.dylib + 115706) [0x7fff672623fa]
             *999  hndl_unix_scall64 + 22 (kernel + 132678) [0xffffff8000220646]
               *999  unix_syscall64 + 616 (kernel + 6308952) [0xffffff8000804458]
                 *999  ioctl + 1324 (kernel + 5537436) [0xffffff8000747e9c]
                   *999  fo_ioctl + 67 (kernel + 5207075) [0xffffff80006f7423]
                     *999  ??? (kernel + 2888555) [0xffffff80004c136b]
                       *999  VNOP_IOCTL + 191 (kernel + 2943647) [0xffffff80004cea9f]
                         *999  spec_ioctl + 113 (kernel + 2986641) [0xffffff80004d9291]
                           *999  zfsdev_ioctl + 1686 (zfs + 748839) [0xffffff7f81d2dd27]
                             *999  zfs_ioc_clear + 373 (zfs + 733358) [0xffffff7f81d2a0ae]
                               *999  spa_vdev_state_exit + 192 (zfs + 472376) [0xffffff7f81cea538]
                                 *999  txg_wait_synced + 212 (zfs + 493799) [0xffffff7f81cef8e7]
                                   *999  spl_cv_wait + 50 (spl + 6423) [0xffffff7f80a83917]
                                     *999  msleep + 98 (kernel + 5436882) [0xffffff800072f5d2]
                                       *999  ??? (kernel + 5435654) [0xffffff800072f106]
                                         *999  lck_mtx_sleep + 126 (kernel + 512366) [0xffffff800027d16e]
                                           *999  thread_block_reason + 175 (kernel + 558223) [0xffffff800028848f]
                                             *999  ??? (kernel + 562378) [0xffffff80002894ca]
                                               *999  machine_switch_context + 205 (kernel + 1593053) [0xffffff8000384edd]
Process:         zed [65]
Path:            /usr/local/bin/zed
Architecture:    x86_64
Parent:          launchd [1]
UID:             0
Task size:       5052 KB

  Thread 0x2b0              DispatchQueue 140735869343320                       999 samples (1-999)       priority 20 (base 20)
  999  start + 1 (libdyld.dylib + 4117) [0x7fff67112015]
    999  main + 1170 (zed + 6630) [0x104ba69e6]
      999  zed_event_service + 75 (zed + 12817) [0x104ba8211]
        999  zpool_events_next + 180 (libzfs.2.dylib + 117049) [0x104fd9939]
          999  zfs_ioctl + 55 (libzfs.2.dylib + 167147) [0x104fe5ceb]
            999  __ioctl + 10 (libsystem_kernel.dylib + 115706) [0x7fff672623fa]
             *999  hndl_unix_scall64 + 22 (kernel + 132678) [0xffffff8000220646]
               *999  unix_syscall64 + 616 (kernel + 6308952) [0xffffff8000804458]
                 *999  ioctl + 1324 (kernel + 5537436) [0xffffff8000747e9c]
                   *999  fo_ioctl + 67 (kernel + 5207075) [0xffffff80006f7423]
                     *999  ??? (kernel + 2888555) [0xffffff80004c136b]
                       *999  VNOP_IOCTL + 191 (kernel + 2943647) [0xffffff80004cea9f]
                         *999  spec_ioctl + 113 (kernel + 2986641) [0xffffff80004d9291]
                           *999  zfsdev_ioctl + 1686 (zfs + 748839) [0xffffff7f81d2dd27]
                             *999  zfs_ioc_events_next + 163 (zfs + 740896) [0xffffff7f81d2be20]
                               *999  zfs_zevent_wait + 69 (zfs + 320840) [0xffffff7f81cc5548]
                                 *999  spl_cv_wait + 50 (spl + 6423) [0xffffff7f80a83917]
                                   *999  msleep + 98 (kernel + 5436882) [0xffffff800072f5d2]
                                     *999  ??? (kernel + 5435654) [0xffffff800072f106]
                                       *999  lck_mtx_sleep + 126 (kernel + 512366) [0xffffff800027d16e]
                                         *999  thread_block_reason + 175 (kernel + 558223) [0xffffff800028848f]
                                           *999  ??? (kernel + 562378) [0xffffff80002894ca]
                                             *999  machine_switch_context + 205 (kernel + 1593053) [0xffffff8000384edd]
Process:         mount_hfs [2880]
Path:            /System/Library/Filesystems/hfs.fs/Contents/Resources/mount_hfs
Architecture:    x86_64
Parent:          mount [2879]
Responsible:     diskarbitrationd [80]
UID:             0
Task size:       1396 KB

  Thread 0x3ec93            DispatchQueue 140735869343320                       999 samples (1-999)       priority 95 (base 31)
  999  start + 1 (libdyld.dylib + 4117) [0x7fff67112015]
    999  mount + 10 (libsystem_kernel.dylib + 122410) [0x7fff67263e2a]
     *999  hndl_unix_scall64 + 22 (kernel + 132678) [0xffffff8000220646]
       *999  unix_syscall64 + 616 (kernel + 6308952) [0xffffff8000804458]
         *999  mount + 54 (kernel + 2789414) [0xffffff80004a9026]
           *999  __mac_mount + 1075 (kernel + 2790499) [0xffffff80004a9463]
             *999  ??? (kernel + 2785451) [0xffffff80004a80ab]
               *999  hfs_mount + 170 (HFS + 206998) [0xffffff7f83c73896]
                 *999  hfs_mountfs + 2919 (HFS + 238124) [0xffffff7f83c7b22c]
                   *999  hfs_early_journal_init + 718 (HFS + 312626) [0xffffff7f83c8d532]
                     *999  journal_open + 5913 (HFS + 88027) [0xffffff7f83c567db]
                       *999  VNOP_BWRITE + 66 (kernel + 2955634) [0xffffff80004d1972]
                         *999  buf_bwrite + 164 (kernel + 2646660) [0xffffff8000486284]
                           *999  spec_strategy + 916 (kernel + 2988004) [0xffffff80004d97e4]
                             *999  BC_strategy + 1198 (BootCache + 16654) [0xffffff7f83db610e]
                               *999  dkreadwrite(void*, dkrtype_t) + 1523 (IOStorageFamily + 55094) [0xffffff7f80a5e736]
                                 *999  CoreStorageGroup::write(IOService*, unsigned long long, IOMemoryDescriptor*, IOStorageAttributes*, IOStorageCompletion*) + 303 (CoreStorage + 305011) [0xffffff7f83d05773]
                                   *999  CoreStorageGroup::ioreq(IOService*, unsigned long long, IOMemoryDescriptor*, IOStorageAttributes*, IOStorageCompletion*, rl_node**) + 123 (CoreStorage + 305667) [0xffffff7f83d05a03]
                                     *999  CoreStorageIORequest::write() + 341 (CoreStorage + 340079) [0xffffff7f83d0e06f]
                                       *999  CoreStoragePhysical::write(IOService*, unsigned long long, IOMemoryDescriptor*, IOStorageAttributes*, IOStorageCompletion*) + 227 (CoreStorage + 365967) [0xffffff7f83d1458f]
                                         *999  IOBlockStorageDriver::prepareRequest(unsigned long long, IOMemoryDescriptor*, IOStorageAttributes*, IOStorageCompletion*) + 298 (IOStorageFamily + 30478) [0xffffff7f80a5870e]
                                           *999  IOBlockStorageDriver::executeRequest(unsigned long long, IOMemoryDescriptor*, IOStorageAttributes*, IOStorageCompletion*, IOBlockStorageDriver::Context*) + 294 (IOStorageFamily + 20016) [0xffffff7f80a55e30]
                                             *999  net_lundman_zfs_zvol_device::doAsyncReadWrite(IOMemoryDescriptor*, unsigned long long, unsigned long long, IOStorageAttributes*, IOStorageCompletion*) + 338 (zfs + 952894) [0xffffff7f81d5fa3e]
                                               *999  zvol_write_iokit + 359 (zfs + 945723) [0xffffff7f81d5de3b]
                                                 *999  dmu_write_iokit_dbuf + 239 (zfs + 99258) [0xffffff7f81c8f3ba]
                                                   *999  dmu_buf_will_dirty_impl + 138 (zfs + 68423) [0xffffff7f81c87b47]
                                                     *999  dbuf_read + 1698 (zfs + 63548) [0xffffff7f81c8683c]
                                                       *999  zio_wait + 609 (zfs + 888998) [0xffffff7f81d500a6]
                                                         *999  spl_cv_wait + 50 (spl + 6423) [0xffffff7f80a83917]
                                                           *999  msleep + 98 (kernel + 5436882) [0xffffff800072f5d2]
                                                             *999  ??? (kernel + 5435654) [0xffffff800072f106]
                                                               *999  lck_mtx_sleep + 126 (kernel + 512366) [0xffffff800027d16e]
                                                                 *999  thread_block_reason + 175 (kernel + 558223) [0xffffff800028848f]
                                                                   *999  ??? (kernel + 562378) [0xffffff80002894ca]
                                                                     *999  machine_switch_context + 205 (kernel + 1593053) [0xffffff8000384edd]
lundman commented 4 years ago

zpool clear is stuck waiting on txg_sync, and if txg_sync is stuck, then basically everything will stop. What are the two txg_sync threads doing?
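
(For reference, those threads can be pulled out of a saved spindump text file with something like this, assuming it was written to spindump.txt:)

$ grep -B 2 -A 12 'txg_sync_thread' spindump.txt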

whizzrd commented 4 years ago
Process:         kernel_task [0]
Path:            /System/Library/Kernels/kernel
Architecture:    x86_64
UID:             0
Version:         Darwin Kernel Version 17.7.0: Fri Oct  4 23:08:59 PDT 2019; root:xnu-4570.71.57~1/RELEASE_X86_64
Task size:       2316.14 MB (+2936 KB)
CPU Time:        2.028s (4.1G cycles, 1366.6M instructions, 3.03c/i)

...
  Thread 0x180              999 samples (1-999)       priority 81 (base 81)     cpu time <0.001s (814.5K cycles, 452.5K instructions, 1.80c/i)
 *999  call_continuation + 23 (kernel + 128343) [0xffffff800021f557]
   *999  _cs_thread_create_int(thread_wrapper*) + 29 (CoreStorage + 527400) [0xffffff7f83d3bc28]
     *560  txg_sync_thread(void*) + 194 (CoreStorage + 244690) [0xffffff7f83cf6bd2]
       *560  _cv_wait_for_nanosec + 128 (CoreStorage + 526906) [0xffffff7f83d3ba3a]
         *560  msleep + 98 (kernel + 5436882) [0xffffff800072f5d2]
           *560  ??? (kernel + 5435416) [0xffffff800072f018]
             *560  lck_mtx_sleep_deadline + 139 (kernel + 512939) [0xffffff800027d3ab]
               *560  thread_block_reason + 175 (kernel + 558223) [0xffffff800028848f]
                 *560  ??? (kernel + 562378) [0xffffff80002894ca]
                   *560  machine_switch_context + 205 (kernel + 1593053) [0xffffff8000384edd]
     *439  txg_sync_thread(void*) + 242 (CoreStorage + 244738) [0xffffff7f83cf6c02]
       *439  _cv_wait + 62 (CoreStorage + 526762) [0xffffff7f83d3b9aa]
         *439  msleep + 98 (kernel + 5436882) [0xffffff800072f5d2]
           *439  ??? (kernel + 5435654) [0xffffff800072f106]
             *439  lck_mtx_sleep + 126 (kernel + 512366) [0xffffff800027d16e]
               *439  thread_block_reason + 175 (kernel + 558223) [0xffffff800028848f]
                 *439  ??? (kernel + 562378) [0xffffff80002894ca]
                   *439  machine_switch_context + 205 (kernel + 1593053) [0xffffff8000384edd]

...
  Thread 0x238              999 samples (1-999)       priority 81 (base 81)
 *999  call_continuation + 23 (kernel + 128343) [0xffffff800021f557]
   *999  _cs_thread_create_int(thread_wrapper*) + 29 (CoreStorage + 527400) [0xffffff7f83d3bc28]
     *999  txg_sync_thread(void*) + 242 (CoreStorage + 244738) [0xffffff7f83cf6c02]
       *999  _cv_wait + 62 (CoreStorage + 526762) [0xffffff7f83d3b9aa]
         *999  msleep + 98 (kernel + 5436882) [0xffffff800072f5d2]
           *999  ??? (kernel + 5435654) [0xffffff800072f106]
             *999  lck_mtx_sleep + 126 (kernel + 512366) [0xffffff800027d16e]
               *999  thread_block_reason + 175 (kernel + 558223) [0xffffff800028848f]
                 *999  ??? (kernel + 562378) [0xffffff80002894ca]
                   *999  machine_switch_context + 205 (kernel + 1593053) [0xffffff8000384edd]

  Thread 0x3ea5b            999 samples (1-999)       priority 81 (base 81)
 *999  call_continuation + 23 (kernel + 128343) [0xffffff800021f557]
   *999  txg_sync_thread + 309 (zfs + 492359) [0xffffff7f81cef347]
     *999  spl_cv_wait + 50 (spl + 6423) [0xffffff7f80a83917]
       *999  msleep + 98 (kernel + 5436882) [0xffffff800072f5d2]
         *999  ??? (kernel + 5435654) [0xffffff800072f106]
           *999  lck_mtx_sleep + 126 (kernel + 512366) [0xffffff800027d16e]
             *999  thread_block_reason + 175 (kernel + 558223) [0xffffff800028848f]
               *999  ??? (kernel + 562378) [0xffffff80002894ca]
                 *999  machine_switch_context + 205 (kernel + 1593053) [0xffffff8000384edd]

...
  Thread 0x3ec80            999 samples (1-999)       priority 81 (base 81)
 *999  call_continuation + 23 (kernel + 128343) [0xffffff800021f557]
   *999  _cs_thread_create_int(thread_wrapper*) + 29 (CoreStorage + 527400) [0xffffff7f83d3bc28]
     *999  txg_sync_thread(void*) + 242 (CoreStorage + 244738) [0xffffff7f83cf6c02]
       *999  _cv_wait + 62 (CoreStorage + 526762) [0xffffff7f83d3b9aa]
         *999  msleep + 98 (kernel + 5436882) [0xffffff800072f5d2]
           *999  ??? (kernel + 5435654) [0xffffff800072f106]
             *999  lck_mtx_sleep + 126 (kernel + 512366) [0xffffff800027d16e]
               *999  thread_block_reason + 175 (kernel + 558223) [0xffffff800028848f]
                 *999  ??? (kernel + 562378) [0xffffff80002894ca]
                   *999  machine_switch_context + 205 (kernel + 1593053) [0xffffff8000384edd]
lundman commented 4 years ago

Looks like they are doing SFA, just sitting there. Most peculiar; I don't think I've come across this before.

whizzrd commented 4 years ago

Anything I can do to get more information? The disk image is about 500 GB, but perhaps there is a DTrace script you could provide that I could run while reproducing, or zdb commands to execute.
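
(As an illustration of the kind of read-only inspection zdb allows without importing the pool, assuming the image is attached as /dev/disk15 as in the next comment; these are standard zdb invocations, not ones requested in this thread:)

$ sudo zdb -l /dev/disk15s1        # dump the vdev labels from the attached image
$ sudo zdb -e -p /dev -u zpool     # dump the uberblocks of the not-yet-imported pool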

beren12 commented 4 years ago

I just put ZFS on my dad's external 8 TB drive and he's been getting I/O suspended; it seems to happen overnight when the machine is idle. I have ZFS on an SD card for my downloads on my MBP and it's been fine for months, but I recently put ZFS on my external 4 TB drive and the machine hung overnight; I noticed it in the morning. Maybe the drive is sleeping and ZFS gets confused. Both machines have the drive-sleep setting disabled.
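
(For reference, the drive-sleep setting can be checked and turned off from the command line; a quick sketch:)

$ pmset -g | grep disksleep     # 0 means hard disks are never put to sleep
$ sudo pmset -a disksleep 0     # disable disk sleep on all power sources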

lundman commented 4 years ago

You can run something like

sudo dtrace -qn 'zfs_dbgmsg_mac:entry{printf("%s\n", stringof(arg0));}'

before the import, and it will print the internal debug buffers, which include the txgs.

And possibly post the full spindump; I need to see whether it is hung on I/O somewhere, since the txg threads are just waiting.
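
(For reference, a system-wide spindump like the snippets above can typically be captured while the command is hung by running the tool with no arguments; by default it samples all processes for about 10 seconds. The output location varies by macOS version, so check the path spindump reports when it finishes.)

$ sudo spindump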

whizzrd commented 4 years ago
$ hdiutil attach -nomount ~/Desktop/zpool.dmg 
/dev/disk15             GUID_partition_scheme           
/dev/disk15s1           ZFS                             
/dev/disk15s9           6A945A3B-1DD2-11B2-99A6-0800207
$ sudo zpool import zpool

$ sudo zpool status
  pool: zpool
 state: ONLINE
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://zfsonlinux.org/msg/ZFS-8000-JQ
  scan: scrub repaired 32K in 0 days 01:20:07 with 0 errors on Tue Nov 26 21:44:20 2019
config:

    NAME                                          STATE     READ WRITE CKSUM
    zpool                                         ONLINE       0     0     4
      media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2  ONLINE       0     0     4

errors: 1 data errors, use '-v' for a list
joris@MacMini ~ $ sudo zpool scrub zpool
cannot scrub zpool: pool I/O is currently suspended
$ sudo dtrace -qn 'zfs_dbgmsg_mac:entry{printf("%s\n", stringof(arg0));}'
spa_tryimport: importing zpool
spa_load($import, config trusted): LOADING
disk vdev '/private/var/run/disk/by-id/media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2': best uberblock found for spa $import. txg 151912
spa_load($import, config untrusted): using uberblock with txg=151912
vdev_copy_path: vdev 3431280965388029399: path changed from '/dev/disk11s1' to '/private/var/run/disk/by-id/media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2'
spa_load($import, config trusted): LOADED
spa_load($import, config trusted): UNLOADING
spa_tryimport: importing zpool
spa_load($import, config trusted): LOADING
disk vdev '/private/var/run/disk/by-id/media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2': best uberblock found for spa $import. txg 151912
spa_load($import, config untrusted): using uberblock with txg=151912
vdev_copy_path: vdev 3431280965388029399: path changed from '/dev/disk11s1' to '/private/var/run/disk/by-id/media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2'
spa_load($import, config trusted): LOADED
spa_load($import, config trusted): UNLOADING
spa_import: importing zpool
spa_load(zpool, config trusted): LOADING
disk vdev '/private/var/run/disk/by-id/media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2': best uberblock found for spa zpool. txg 151912
spa_load(zpool, config untrusted): using uberblock with txg=151912
vdev_copy_path: vdev 3431280965388029399: path changed from '/dev/disk11s1' to '/private/var/run/disk/by-id/media-6ACB9174-C22B-F943-95CB-C46AEE8C90A2'
spa=zpool async request task=1
spa_load(zpool, config trusted): LOADED
txg 151914 open pool version 5000; software version 5000/18446743522517281469; uts  17.7.0 Darwin Kernel Version 17.7.0: Fri Oct  4 23:08:59 PDT 2019; root:xnu-4570.71.57~1/RELEASE_X86_64 
spa=zpool async request task=32
txg 151917 import pool version 5000; software version 5000/18446743522517281469; uts  17.7.0 Darwin Kernel Version 17.7.0: Fri Oct  4 23:08:59 PDT 2019; root:xnu-4570.71.57~1/RELEASE_X86_64 
spa_tryimport: importing trinity
spa_load($import, config trusted): LOADING
disk vdev '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0': best uberblock found for spa $import. txg 14
spa_load($import, config untrusted): using uberblock with txg=14
vdev_copy_path: vdev 14629286330009026479: path changed from '/dev/disk4' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0'
vdev_copy_path: vdev 13993740478384210423: path changed from '/dev/disk6' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,1:0'
vdev_copy_path: vdev 1866547862952595128: path changed from '/dev/disk7' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,2:0'
spa_load($import, config trusted): LOADED
spa_load($import, config trusted): UNLOADING
spa_tryimport: importing trinity
spa_load($import, config trusted): LOADING
disk vdev '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0': best uberblock found for spa $import. txg 14
spa_load($import, config untrusted): using uberblock with txg=14
vdev_copy_path: vdev 14629286330009026479: path changed from '/dev/disk4' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0'
vdev_copy_path: vdev 13993740478384210423: path changed from '/dev/disk6' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,1:0'
vdev_copy_path: vdev 1866547862952595128: path changed from '/dev/disk7' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,2:0'
spa_load($import, config trusted): LOADED
spa_load($import, config trusted): UNLOADING
spa_tryimport: importing trinity
spa_load($import, config trusted): LOADING
disk vdev '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0': best uberblock found for spa $import. txg 14
spa_load($import, config untrusted): using uberblock with txg=14
vdev_copy_path: vdev 14629286330009026479: path changed from '/dev/disk4' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0'
vdev_copy_path: vdev 13993740478384210423: path changed from '/dev/disk6' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,1:0'
vdev_copy_path: vdev 1866547862952595128: path changed from '/dev/disk7' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,2:0'
spa_load($import, config trusted): LOADED
spa_load($import, config trusted): UNLOADING
spa_tryimport: importing trinity
spa_load($import, config trusted): LOADING
disk vdev '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0': best uberblock found for spa $import. txg 14
spa_load($import, config untrusted): using uberblock with txg=14
vdev_copy_path: vdev 14629286330009026479: path changed from '/dev/disk4' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0'
vdev_copy_path: vdev 13993740478384210423: path changed from '/dev/disk6' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,1:0'
vdev_copy_path: vdev 1866547862952595128: path changed from '/dev/disk7' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,2:0'
spa_load($import, config trusted): LOADED
spa_load($import, config trusted): UNLOADING
spa_tryimport: importing trinity
spa_load($import, config trusted): LOADING
disk vdev '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0': best uberblock found for spa $import. txg 14
spa_load($import, config untrusted): using uberblock with txg=14
vdev_copy_path: vdev 14629286330009026479: path changed from '/dev/disk4' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0'
vdev_copy_path: vdev 13993740478384210423: path changed from '/dev/disk6' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,1:0'
vdev_copy_path: vdev 1866547862952595128: path changed from '/dev/disk7' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,2:0'
spa_load($import, config trusted): LOADED
spa_load($import, config trusted): UNLOADING
spa_tryimport: importing trinity
spa_load($import, config trusted): LOADING
disk vdev '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0': best uberblock found for spa $import. txg 14
spa_load($import, config untrusted): using uberblock with txg=14
vdev_copy_path: vdev 14629286330009026479: path changed from '/dev/disk4' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2:0'
vdev_copy_path: vdev 13993740478384210423: path changed from '/dev/disk6' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,1:0'
vdev_copy_path: vdev 1866547862952595128: path changed from '/dev/disk7' to '/private/var/run/disk/by-path/PCI0@0-XHC1@14-@2,2:0'
spa_load($import, config trusted): LOADED
spa_load($import, config trusted): UNLOADING
lundman commented 4 years ago

That wasn't very useful. It seems that hfs_mount is locked up, as well as zpool clear, so I wonder whether we can stop HFS from mounting it. Alas, zpool import -N will still create the zvol device nodes, and macOS will probe them for HFS and mount them. It could be that you can set snapdev to invisible before import -N. Then check that there is no stuck hfs_mount (or other process) before doing anything else.
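
A rough sketch of that check, assuming the pool is named zpool (the snapdev suggestion is speculative, as noted above; in upstream OpenZFS that property takes the values hidden and visible):

$ sudo zpool import -N zpool      # import without mounting any filesystems
$ ps aux | grep '[m]ount_hfs'     # confirm no mount_hfs (or other mounter) is stuck before running further zpool commands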