Open kneutron opened 4 years ago
Bump - this appears to be a regression, ETA was not this bad in previous releases
See also #11779
System information
Describe the problem you're observing
4x4TB disk mirror pool; attaching 2x6TB disks one at a time to mirror-0 to grow the pool without losing redundancy, with the aim of detaching the two original 4TB disks afterward so they can be repurposed.
`zpool status; zpool iostat` shows a very inaccurate estimate of percent done and ETA. The resilver had been running for ~15 minutes, with `echo 0 > /sys/module/zfs/parameters/zfs_resilver_delay` set to speed up resilver I/O. The scrub immediately preceding this operation took:
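One plausible reason the early ETA is so far off: progress displays like this typically extrapolate from the average scan rate since the resilver started, so a slow first few minutes (metadata traversal, seek-bound I/O) inflates the estimate enormously. A minimal sketch of that naive extrapolation (hypothetical helper, not the actual `zpool status` code):

```python
def naive_eta(total_bytes, examined_bytes, elapsed_secs):
    """Naive ETA: remaining bytes divided by the average rate so far.

    If the rate during the first minutes is unrepresentative of the
    steady-state rate, the extrapolated ETA is wildly wrong.
    """
    if examined_bytes == 0:
        return float("inf")
    rate = examined_bytes / elapsed_secs  # average bytes/sec since start
    return (total_bytes - examined_bytes) / rate

# 15 minutes in, only 1% of a 16 TB pool examined:
eta = naive_eta(total_bytes=16_000_000_000_000,
                examined_bytes=160_000_000_000,
                elapsed_secs=900)
# eta is 89100 seconds, i.e. ~24.75 hours, even if the steady-state
# rate would finish the job far sooner.
```

This is why the displayed ETA tends to converge toward reality only after the resilver has been running at its steady rate for a while.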
scan: scrub repaired 0B in 0 days 08:52:42 with 0 errors on Sat Jun 20 22:01:42 2020
Disks are connected via SAS with breakout cables:
Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
Most datasets are recordsize=1M, but the pool does not have all current features enabled. (This pool can probably be upgraded, since I don't intend to go back to any previous ZFS version and it will stay on Linux.)
zpool upgrade
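For reference, `zpool upgrade` with no arguments only lists pools that are missing feature flags (read-only and safe); enabling the features is a separate, one-way step. A sketch, with `tank` as a hypothetical pool name:

```shell
# List pools whose feature flags are not all enabled (read-only, safe):
zpool upgrade

# Inspect per-pool feature state:
zpool get all tank | grep feature@

# Enable all features supported by the running ZFS version.
# One-way: older ZFS releases may no longer import the pool afterwards.
zpool upgrade tank
```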
Updated at 22:50 below:
Describe how to reproduce the problem
Attach a larger disk to a ZFS RAID10 mirror-0, one at a time, and wait for the resilver to finish. The pool is currently:
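The attach/resilver/detach cycle described above can be sketched as follows. Pool and device names are placeholders; substitute your own, and note that `zpool attach`/`detach` operate on one mirror side at a time:

```shell
# Hypothetical names -- adjust for your system:
POOL=tank
OLD1=/dev/disk/by-id/ata-OLD_4TB_1
NEW1=/dev/disk/by-id/ata-NEW_6TB_1

# Attach a 6TB disk as an extra side of the mirror (pool stays redundant):
zpool attach "$POOL" "$OLD1" "$NEW1"

# Watch the resilver; the ETA reported here is what this issue is about:
watch -n 10 "zpool status $POOL"

# After the resilver completes, repeat for the second disk, then
# detach the old 4TB disks:
zpool detach "$POOL" "$OLD1"

# The vdev only grows to the new size with autoexpand enabled
# (or after 'zpool online -e' on the new disks):
zpool set autoexpand=on "$POOL"
```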
Include any warning/errors/backtraces from the system logs (N/A)