openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

ZFS lockup, data not written #4486

Closed: wipash closed this issue 5 years ago

wipash commented 8 years ago

We recently experienced an issue where ZFS locked up, and files that were subsequently written to a Samba share were lost on reboot. It seemed like Samba acknowledged the write, but ZFS hadn't actually written anything to disk.

Samba audit logging showed the files being written, but after a reboot to restore ZFS functionality the logged files were no longer present.
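
As an aside (a configuration sketch, not a fix for the underlying lockup): if the goal is for a Samba-acknowledged write to have at least reached stable storage, Samba's standard strict sync / sync always smb.conf parameters force fsync behaviour at the cost of throughput. The share name and path below are illustrative:

# /etc/samba/smb.conf (illustrative share; trades throughput for durability)
[share]
    path = /chg_live/share    # hypothetical path on the affected pool
    strict sync = yes         # honour clients' sync/flush requests
    sync always = no          # set to yes to fsync after every write (much slower)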

System info:

[root@chg-sto-102 ~]# cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)

[root@chg-sto-102 ~]# modinfo zfs
filename:       /lib/modules/3.10.0-327.4.5.el7.x86_64/weak-updates/zfs.ko
version:        0.6.5-101_gab5cbbd

[root@chg-sto-102 ~]# modinfo spl
filename:       /lib/modules/3.10.0-327.4.5.el7.x86_64/weak-updates/spl.ko
version:        0.6.5-33_g466bcf3

[root@chg-sto-102 ~]# uname -sr
Linux 3.10.0-327.4.5.el7.x86_64

Pool info:

  pool: chg_live
 state: ONLINE
  scan: scrub in progress since Thu Mar 10 13:16:40 2016
        21.4T scanned out of 31.9T at 292M/s, 10h25m to go
        0 repaired, 67.18% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        chg_live                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-                ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-                ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-                ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-                ONLINE       0     0     0
            ata-ST4000DM000-1F2168_                     ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            ata-ST4000DM000-1F2168_                     ONLINE       0     0     0
            ata-ST4000DM000-1F2168_                     ONLINE       0     0     0
            ata-ST4000DM000-1F2168_                     ONLINE       0     0     0
            ata-ST4000DM000-1F2168_                     ONLINE       0     0     0
            ata-ST4000DM000-1F2168_                     ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            ata-WDC_WD3000FYYZ-01UL1B0_WD-              ONLINE       0     0     0
            ata-WDC_WD3000FYYZ-01UL1B0_WD-              ONLINE       0     0     0
            ata-WDC_WD3000FYYZ-01UL1B0_WD-              ONLINE       0     0     0
            ata-WDC_WD3000FYYZ-01UL1B0_WD-              ONLINE       0     0     0
            ata-WDC_WD3000FYYZ-01UL1B0_WD-              ONLINE       0     0     0

errors: No known data errors

Kernel trace:

Mar 11 09:33:26 chg-sto-102 kernel: INFO: task zfs:17304 blocked for more than 120 seconds.
Mar 11 09:33:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:33:26 chg-sto-102 kernel: zfs             D 0000000000000000     0 17304  31718 0x00000080
Mar 11 09:33:26 chg-sto-102 kernel: ffff88048dc3fac8 0000000000000086 ffff8803dd85b980 ffff88048dc3ffd8
Mar 11 09:33:26 chg-sto-102 kernel: ffff88048dc3ffd8 ffff88048dc3ffd8 ffff8803dd85b980 ffff88048dc3fc98
Mar 11 09:33:26 chg-sto-102 kernel: ffff88048dc3fc18 ffff88048dc3fca0 ffff88048dc3fc40 0000000000000000
Mar 11 09:33:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa0011f05>] cv_wait_common+0x125/0x150 [spl]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff810a6ae0>] ? wake_up_atomic_t+0x30/0x30
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa0011f45>] __cv_wait+0x15/0x20 [spl]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00ca9ab>] bqueue_dequeue+0x4b/0xc0 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00d86cc>] dmu_send_impl+0x52c/0xfe0 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00da41a>] dmu_send_obj+0x17a/0x1f0 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa01486dd>] zfs_ioc_send+0xed/0x2d0 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa014cd5d>] zfsdev_ioctl+0x4bd/0x500 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff811f1e75>] do_vfs_ioctl+0x2e5/0x4c0
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff81640dbd>] ? __do_page_fault+0x16d/0x450
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff811f20f1>] SyS_ioctl+0xa1/0xc0
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
Mar 11 09:33:26 chg-sto-102 kernel: INFO: task send_traverse:17313 blocked for more than 120 seconds.
Mar 11 09:33:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:33:26 chg-sto-102 kernel: send_traverse   D ffffffffa022acd8     0 17313      2 0x00000080
Mar 11 09:33:26 chg-sto-102 kernel: ffff880733b47720 0000000000000046 ffff88032d8af300 ffff880733b47fd8
Mar 11 09:33:26 chg-sto-102 kernel: ffff880733b47fd8 ffff880733b47fd8 ffff88032d8af300 ffffffffa022acd0
Mar 11 09:33:26 chg-sto-102 kernel: ffffffffa022acd4 ffff88032d8af300 00000000ffffffff ffffffffa022acd8
Mar 11 09:33:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff8163b959>] schedule_preempt_disabled+0x29/0x70
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff81639655>] __mutex_lock_slowpath+0xc5/0x1c0
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff81638abf>] mutex_lock+0x1f/0x2f
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa0011e96>] cv_wait_common+0xb6/0x150 [spl]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff810a6ae0>] ? wake_up_atomic_t+0x30/0x30
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa0011f45>] __cv_wait+0x15/0x20 [spl]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00be4b2>] arc_read+0xe2/0xa90 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa000b1b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00be360>] ? arc_buf_remove_ref+0x150/0x150 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dc327>] traverse_visitbp+0x687/0x810 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dbfc2>] traverse_visitbp+0x322/0x810 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dbfc2>] traverse_visitbp+0x322/0x810 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dbfc2>] traverse_visitbp+0x322/0x810 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dbfc2>] traverse_visitbp+0x322/0x810 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dbfc2>] traverse_visitbp+0x322/0x810 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dbfc2>] traverse_visitbp+0x322/0x810 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dccf1>] traverse_dnode+0x91/0x1c0 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dc29f>] traverse_visitbp+0x5ff/0x810 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dc650>] traverse_impl+0x1a0/0x420 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00d9180>] ? dmu_send_impl+0xfe0/0xfe0 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00d6420>] ? byteswap_record+0x2d0/0x2d0 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00dc925>] traverse_dataset+0x55/0x60 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00d9180>] ? dmu_send_impl+0xfe0/0xfe0 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa00d6455>] send_traverse_thread+0x35/0x80 [zfs]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa000cec1>] thread_generic_wrapper+0x71/0x80 [spl]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffffa000ce50>] ? __thread_exit+0x20/0x20 [spl]
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff810a5aef>] kthread+0xcf/0xe0
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff81645818>] ret_from_fork+0x58/0x90
Mar 11 09:33:26 chg-sto-102 kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Mar 11 09:35:26 chg-sto-102 kernel: INFO: task txg_sync:2246 blocked for more than 120 seconds.
Mar 11 09:35:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:35:26 chg-sto-102 kernel: txg_sync        D ffffffffa022acd8     0  2246      2 0x00000000
Mar 11 09:35:26 chg-sto-102 kernel: ffff8807ef497300 0000000000000046 ffff8807f0cce780 ffff8807ef497fd8
Mar 11 09:35:26 chg-sto-102 kernel: ffff8807ef497fd8 ffff8807ef497fd8 ffff8807f0cce780 ffffffffa022acd0
Mar 11 09:35:26 chg-sto-102 kernel: ffffffffa022acd4 ffff8807f0cce780 00000000ffffffff ffffffffa022acd8
Mar 11 09:35:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163b959>] schedule_preempt_disabled+0x29/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81639655>] __mutex_lock_slowpath+0xc5/0x1c0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81638abf>] mutex_lock+0x1f/0x2f
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00b86b0>] buf_hash_find+0xa0/0x150 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00be4fc>] arc_read+0x12c/0xa90 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fbaba>] dsl_scan_prefetch.isra.5+0xda/0xf0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fc296>] dsl_scan_visitbp+0x306/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fc349>] dsl_scan_visitbp+0x3b9/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fc349>] dsl_scan_visitbp+0x3b9/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fc349>] dsl_scan_visitbp+0x3b9/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fc349>] dsl_scan_visitbp+0x3b9/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fc349>] dsl_scan_visitbp+0x3b9/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fc853>] dsl_scan_visitbp+0x8c3/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e1fa5>] ? dnode_setdirty+0x145/0x190 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c7c10>] ? dbuf_dirty+0x460/0x8c0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fccc2>] dsl_scan_visitds+0xe2/0x4e0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00fe268>] dsl_scan_sync+0x2d8/0xc90 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000c4fd>] ? spl_kmem_cache_free+0x14d/0x1d0 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa016b0b7>] ? zio_destroy+0x77/0x80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa01124b5>] spa_sync+0x3e5/0xb70 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810a6b0b>] ? autoremove_wake_function+0x2b/0x40
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0124b2c>] txg_sync_thread+0x3cc/0x640 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0124760>] ? txg_fini+0x2a0/0x2a0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000cec1>] thread_generic_wrapper+0x71/0x80 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000ce50>] ? __thread_exit+0x20/0x20 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810a5aef>] kthread+0xcf/0xe0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81645818>] ret_from_fork+0x58/0x90
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Mar 11 09:35:26 chg-sto-102 kernel: INFO: task smbd:2604 blocked for more than 120 seconds.
Mar 11 09:35:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:35:26 chg-sto-102 kernel: smbd            D ffff8806bc2444a0     0  2604  10720 0x00000080
Mar 11 09:35:26 chg-sto-102 kernel: ffff88042199f8c8 0000000000000086 ffff8807ea75c500 ffff88042199ffd8
Mar 11 09:35:26 chg-sto-102 kernel: ffff88042199ffd8 ffff88042199ffd8 ffff8807ea75c500 ffff8807ea75c500
Mar 11 09:35:26 chg-sto-102 kernel: ffff8807f22e6518 ffffffffffffffff ffff8807f22e6520 ffff8806bc2444a0
Mar 11 09:35:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163c235>] rwsem_down_read_failed+0xf5/0x170
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81301764>] call_rwsem_down_read_failed+0x14/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81639a90>] ? down_read+0x20/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00dfaf4>] dmu_zfetch+0xf4/0xc80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810a97e6>] ? __remove_hrtimer+0x46/0xe0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c5a0c>] dbuf_read+0x78c/0x880 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c720b>] ? dbuf_hold_impl+0x8b/0xa0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e17d1>] dnode_hold_impl+0xe1/0x590 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8151a196>] ? skb_release_data+0xd6/0x110
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c332c>] ? __kmalloc_node+0x5c/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c340d>] ? __kmalloc_node+0x13d/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000b047>] ? spl_kmem_alloc+0xc7/0x170 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000b1b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e1c99>] dnode_hold+0x19/0x20 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00ddaa2>] dmu_tx_hold_object_impl.isra.5+0x42/0x150 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00ddbc5>] dmu_tx_hold_bonus+0x15/0x30 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00de98c>] dmu_tx_hold_sa+0x3c/0x180 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa015ea3d>] zfs_write+0x54d/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff815a109b>] ? inet_recvmsg+0x7b/0xa0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8123088b>] ? locks_alloc_lock+0x1b/0x60
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa01746a5>] zpl_write_common_iovec.constprop.8+0x95/0x100 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa017481e>] zpl_write+0x7e/0xb0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811de54d>] vfs_write+0xbd/0x1e0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811df1a2>] SyS_pwrite64+0x92/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
Mar 11 09:35:26 chg-sto-102 kernel: INFO: task smbd:7953 blocked for more than 120 seconds.
Mar 11 09:35:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:35:26 chg-sto-102 kernel: smbd            D ffff88062bb28940     0  7953  10720 0x00000080
Mar 11 09:35:26 chg-sto-102 kernel: ffff88057d9bf598 0000000000000082 ffff8805829c0b80 ffff88057d9bffd8
Mar 11 09:35:26 chg-sto-102 kernel: ffff88057d9bffd8 ffff88057d9bffd8 ffff8805829c0b80 ffff8805829c0b80
Mar 11 09:35:26 chg-sto-102 kernel: ffff8807f22e6518 ffffffffffffffff ffff8807f22e6520 ffff88062bb28940
Mar 11 09:35:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163c235>] rwsem_down_read_failed+0xf5/0x170
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81301764>] call_rwsem_down_read_failed+0x14/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81639a90>] ? down_read+0x20/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00dfaf4>] dmu_zfetch+0xf4/0xc80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c5a0c>] dbuf_read+0x78c/0x880 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c720b>] ? dbuf_hold_impl+0x8b/0xa0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e17d1>] dnode_hold_impl+0xe1/0x590 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e1c99>] dnode_hold+0x19/0x20 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00cfeea>] dmu_buf_hold_noread+0x3a/0x100 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00cffdf>] dmu_buf_hold+0x2f/0x80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810be2ee>] ? account_entity_dequeue+0xae/0xd0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0139628>] zap_lockdir+0x58/0x750 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810c219e>] ? dequeue_task_fair+0x40e/0x620
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810bb685>] ? sched_clock_cpu+0x85/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8101360b>] ? __switch_to+0x17b/0x4b0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa013a12b>] zap_lookup_norm+0x5b/0x1d0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa013a2d3>] zap_lookup+0x33/0x40 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0143230>] zfs_dirent_lock+0x4e0/0x550 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa014333a>] zfs_dirlook+0x9a/0x2d0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0158ca1>] zfs_lookup+0x2f1/0x340 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa01751a8>] zpl_lookup+0xd8/0x1f0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811f77fa>] ? d_alloc+0x5a/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e864d>] lookup_real+0x1d/0x50
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e8fc2>] __lookup_hash+0x42/0x60
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81632498>] lookup_slow+0x42/0xa7
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811ede93>] path_lookupat+0x773/0x7a0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c0e05>] ? kmem_cache_alloc+0x35/0x1d0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811eeb8f>] ? getname_flags+0x4f/0x1a0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811edeeb>] filename_lookup+0x2b/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811efcb7>] user_path_at_empty+0x67/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81101072>] ? from_kgid_munged+0x12/0x20
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3a6f>] ? cp_new_stat+0x14f/0x180
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811efd21>] user_path_at+0x11/0x20
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3563>] vfs_fstatat+0x63/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3ace>] SYSC_newstat+0x2e/0x60
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8110b796>] ? __audit_syscall_exit+0x1e6/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3dae>] SyS_newstat+0xe/0x10
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
Mar 11 09:35:26 chg-sto-102 kernel: INFO: task smbd:23279 blocked for more than 120 seconds.
Mar 11 09:35:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:35:26 chg-sto-102 kernel: smbd            D ffff88063c11acb8     0 23279  10720 0x00000080
Mar 11 09:35:26 chg-sto-102 kernel: ffff8804ad01b598 0000000000000082 ffff8807ffadd080 ffff8804ad01bfd8
Mar 11 09:35:26 chg-sto-102 kernel: ffff8804ad01bfd8 ffff8804ad01bfd8 ffff8807ffadd080 ffff8807ffadd080
Mar 11 09:35:26 chg-sto-102 kernel: ffff8807f22e6518 ffffffffffffffff ffff8807f22e6520 ffff88063c11acb8
Mar 11 09:35:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163c235>] rwsem_down_read_failed+0xf5/0x170
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81301764>] call_rwsem_down_read_failed+0x14/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81639a90>] ? down_read+0x20/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00dfaf4>] dmu_zfetch+0xf4/0xc80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa012c514>] ? __vdev_disk_physio+0x3e4/0x450 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c5a0c>] dbuf_read+0x78c/0x880 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c720b>] ? dbuf_hold_impl+0x8b/0xa0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e17d1>] dnode_hold_impl+0xe1/0x590 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e1c99>] dnode_hold+0x19/0x20 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00cfeea>] dmu_buf_hold_noread+0x3a/0x100 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00cffdf>] dmu_buf_hold+0x2f/0x80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810be2ee>] ? account_entity_dequeue+0xae/0xd0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0139628>] zap_lockdir+0x58/0x750 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810c219e>] ? dequeue_task_fair+0x40e/0x620
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810bb685>] ? sched_clock_cpu+0x85/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8101360b>] ? __switch_to+0x17b/0x4b0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa013a12b>] zap_lookup_norm+0x5b/0x1d0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa013a2d3>] zap_lookup+0x33/0x40 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0143230>] zfs_dirent_lock+0x4e0/0x550 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa014333a>] zfs_dirlook+0x9a/0x2d0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0158ca1>] zfs_lookup+0x2f1/0x340 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa01751a8>] zpl_lookup+0xd8/0x1f0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811f77fa>] ? d_alloc+0x5a/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e864d>] lookup_real+0x1d/0x50
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e8fc2>] __lookup_hash+0x42/0x60
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81632498>] lookup_slow+0x42/0xa7
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811ede93>] path_lookupat+0x773/0x7a0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810b5d85>] ? check_preempt_curr+0x75/0xa0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810b5dc9>] ? ttwu_do_wakeup+0x19/0xd0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c0e05>] ? kmem_cache_alloc+0x35/0x1d0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811eeb8f>] ? getname_flags+0x4f/0x1a0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811edeeb>] filename_lookup+0x2b/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811efcb7>] user_path_at_empty+0x67/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810b0d84>] ? __wake_up+0x44/0x50
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811efd21>] user_path_at+0x11/0x20
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3563>] vfs_fstatat+0x63/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3ace>] SYSC_newstat+0x2e/0x60
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810ac6af>] ? __put_cred+0x3f/0x50
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810aca14>] ? commit_creds+0x124/0x260
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8110b796>] ? __audit_syscall_exit+0x1e6/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3dae>] SyS_newstat+0xe/0x10
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
Mar 11 09:35:26 chg-sto-102 kernel: INFO: task smbd:6069 blocked for more than 120 seconds.
Mar 11 09:35:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:35:26 chg-sto-102 kernel: smbd            D ffff8806fbf7b5f8     0  6069  10720 0x00000080
Mar 11 09:35:26 chg-sto-102 kernel: ffff8805be2bb8c8 0000000000000086 ffff880562f78b80 ffff8805be2bbfd8
Mar 11 09:35:26 chg-sto-102 kernel: ffff8805be2bbfd8 ffff8805be2bbfd8 ffff880562f78b80 ffff880562f78b80
Mar 11 09:35:26 chg-sto-102 kernel: ffff8807f22e6518 ffffffffffffffff ffff8807f22e6520 ffff8806fbf7b5f8
Mar 11 09:35:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163c235>] rwsem_down_read_failed+0xf5/0x170
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81171bcf>] ? __free_pages+0x3f/0x60
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81301764>] call_rwsem_down_read_failed+0x14/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81639a90>] ? down_read+0x20/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00dfaf4>] dmu_zfetch+0xf4/0xc80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810a97e6>] ? __remove_hrtimer+0x46/0xe0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c5a0c>] dbuf_read+0x78c/0x880 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c720b>] ? dbuf_hold_impl+0x8b/0xa0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e17d1>] dnode_hold_impl+0xe1/0x590 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8151a196>] ? skb_release_data+0xd6/0x110
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c332c>] ? __kmalloc_node+0x5c/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c340d>] ? __kmalloc_node+0x13d/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000b047>] ? spl_kmem_alloc+0xc7/0x170 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000b1b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e1c99>] dnode_hold+0x19/0x20 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00ddaa2>] dmu_tx_hold_object_impl.isra.5+0x42/0x150 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00ddbc5>] dmu_tx_hold_bonus+0x15/0x30 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00de98c>] dmu_tx_hold_sa+0x3c/0x180 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa015ea3d>] zfs_write+0x54d/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff815a109b>] ? inet_recvmsg+0x7b/0xa0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8150ff66>] ? sock_aio_read.part.7+0x146/0x160
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa01746a5>] zpl_write_common_iovec.constprop.8+0x95/0x100 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa017481e>] zpl_write+0x7e/0xb0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811de54d>] vfs_write+0xbd/0x1e0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811df1a2>] SyS_pwrite64+0x92/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
Mar 11 09:35:26 chg-sto-102 kernel: INFO: task smbd:7858 blocked for more than 120 seconds.
Mar 11 09:35:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:35:26 chg-sto-102 kernel: smbd            D ffff8805442f9970     0  7858  10720 0x00000080
Mar 11 09:35:26 chg-sto-102 kernel: ffff88008e52b598 0000000000000082 ffff880356c44500 ffff88008e52bfd8
Mar 11 09:35:26 chg-sto-102 kernel: ffff88008e52bfd8 ffff88008e52bfd8 ffff880356c44500 ffff880356c44500
Mar 11 09:35:26 chg-sto-102 kernel: ffff8807f22e6518 ffffffffffffffff ffff8807f22e6520 ffff8805442f9970
Mar 11 09:35:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163c235>] rwsem_down_read_failed+0xf5/0x170
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81301764>] call_rwsem_down_read_failed+0x14/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81639a90>] ? down_read+0x20/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00dfaf4>] dmu_zfetch+0xf4/0xc80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c5a0c>] dbuf_read+0x78c/0x880 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c720b>] ? dbuf_hold_impl+0x8b/0xa0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e17d1>] dnode_hold_impl+0xe1/0x590 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e1c99>] dnode_hold+0x19/0x20 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00cfeea>] dmu_buf_hold_noread+0x3a/0x100 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00cffdf>] dmu_buf_hold+0x2f/0x80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810be2ee>] ? account_entity_dequeue+0xae/0xd0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0139628>] zap_lockdir+0x58/0x750 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810c219e>] ? dequeue_task_fair+0x40e/0x620
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810bb685>] ? sched_clock_cpu+0x85/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8101360b>] ? __switch_to+0x17b/0x4b0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa013a12b>] zap_lookup_norm+0x5b/0x1d0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa013a2d3>] zap_lookup+0x33/0x40 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0143230>] zfs_dirent_lock+0x4e0/0x550 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa014333a>] zfs_dirlook+0x9a/0x2d0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa0158ca1>] zfs_lookup+0x2f1/0x340 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa01751a8>] zpl_lookup+0xd8/0x1f0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811f77fa>] ? d_alloc+0x5a/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e864d>] lookup_real+0x1d/0x50
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e8fc2>] __lookup_hash+0x42/0x60
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81632498>] lookup_slow+0x42/0xa7
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811ede93>] path_lookupat+0x773/0x7a0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c0e05>] ? kmem_cache_alloc+0x35/0x1d0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811eeb8f>] ? getname_flags+0x4f/0x1a0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811edeeb>] filename_lookup+0x2b/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811efcb7>] user_path_at_empty+0x67/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81101072>] ? from_kgid_munged+0x12/0x20
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3a6f>] ? cp_new_stat+0x14f/0x180
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811efd21>] user_path_at+0x11/0x20
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3563>] vfs_fstatat+0x63/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3ace>] SYSC_newstat+0x2e/0x60
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8110b796>] ? __audit_syscall_exit+0x1e6/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811e3dae>] SyS_newstat+0xe/0x10
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
Mar 11 09:35:26 chg-sto-102 kernel: INFO: task smbd:21086 blocked for more than 120 seconds.
Mar 11 09:35:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:35:26 chg-sto-102 kernel: smbd            D ffff8807f22e6528     0 21086  10720 0x00000080
Mar 11 09:35:26 chg-sto-102 kernel: ffff8800ab17f8b8 0000000000000082 ffff88032ef49700 ffff8800ab17ffd8
Mar 11 09:35:26 chg-sto-102 kernel: ffff8800ab17ffd8 ffff8800ab17ffd8 ffff88032ef49700 ffff88032ef49700
Mar 11 09:35:26 chg-sto-102 kernel: ffff8807f22e6518 ffff8807f22e6520 ffffffff00000000 ffff8807f22e6528
Mar 11 09:35:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163c035>] rwsem_down_write_failed+0x115/0x220
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810be200>] ? account_entity_enqueue+0x70/0xb0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81301793>] call_rwsem_down_write_failed+0x13/0x20
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000b1b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81639acd>] ? down_write+0x2d/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e01c4>] dmu_zfetch+0x7c4/0xc80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c5a0c>] dbuf_read+0x78c/0x880 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c720b>] ? dbuf_hold_impl+0x8b/0xa0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e17d1>] dnode_hold_impl+0xe1/0x590 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c332c>] ? __kmalloc_node+0x5c/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c340d>] ? __kmalloc_node+0x13d/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000b047>] ? spl_kmem_alloc+0xc7/0x170 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000b1b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e1c99>] dnode_hold+0x19/0x20 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00ddaa2>] dmu_tx_hold_object_impl.isra.5+0x42/0x150 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00ddbc5>] dmu_tx_hold_bonus+0x15/0x30 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00de98c>] dmu_tx_hold_sa+0x3c/0x180 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa015ea3d>] zfs_write+0x54d/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff815a109b>] ? inet_recvmsg+0x7b/0xa0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8150ff66>] ? sock_aio_read.part.7+0x146/0x160
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa01746a5>] zpl_write_common_iovec.constprop.8+0x95/0x100 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa017481e>] zpl_write+0x7e/0xb0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811de54d>] vfs_write+0xbd/0x1e0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811df1a2>] SyS_pwrite64+0x92/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
Mar 11 09:35:26 chg-sto-102 kernel: INFO: task smbd:15219 blocked for more than 120 seconds.
Mar 11 09:35:26 chg-sto-102 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 11 09:35:26 chg-sto-102 kernel: smbd            D ffff88055dfd1ce8     0 15219  10720 0x00000080
Mar 11 09:35:26 chg-sto-102 kernel: ffff8804ff24f8c8 0000000000000086 ffff880657c29700 ffff8804ff24ffd8
Mar 11 09:35:26 chg-sto-102 kernel: ffff8804ff24ffd8 ffff8804ff24ffd8 ffff880657c29700 ffff880657c29700
Mar 11 09:35:26 chg-sto-102 kernel: ffff8807f22e6518 ffffffffffffffff ffff8807f22e6520 ffff88055dfd1ce8
Mar 11 09:35:26 chg-sto-102 kernel: Call Trace:
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8163c235>] rwsem_down_read_failed+0xf5/0x170
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81301764>] call_rwsem_down_read_failed+0x14/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff81639a90>] ? down_read+0x20/0x30
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00dfaf4>] dmu_zfetch+0xf4/0xc80 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff810a97e6>] ? __remove_hrtimer+0x46/0xe0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c5a0c>] dbuf_read+0x78c/0x880 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000adba>] ? spl_kmem_free+0x2a/0x40 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00c720b>] ? dbuf_hold_impl+0x8b/0xa0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e17d1>] dnode_hold_impl+0xe1/0x590 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c332c>] ? __kmalloc_node+0x5c/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811c340d>] ? __kmalloc_node+0x13d/0x280
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000b047>] ? spl_kmem_alloc+0xc7/0x170 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa000b1b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00e1c99>] dnode_hold+0x19/0x20 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00ddaa2>] dmu_tx_hold_object_impl.isra.5+0x42/0x150 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00ddbc5>] dmu_tx_hold_bonus+0x15/0x30 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa00de98c>] dmu_tx_hold_sa+0x3c/0x180 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa015ea3d>] zfs_write+0x54d/0xc50 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff815a109b>] ? inet_recvmsg+0x7b/0xa0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff8150ff66>] ? sock_aio_read.part.7+0x146/0x160
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa01746a5>] zpl_write_common_iovec.constprop.8+0x95/0x100 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffffa017481e>] zpl_write+0x7e/0xb0 [zfs]
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811de54d>] vfs_write+0xbd/0x1e0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff811df1a2>] SyS_pwrite64+0x92/0xc0
Mar 11 09:35:26 chg-sto-102 kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
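
For hangs like the above, it can help to capture the stacks of every blocked task before rebooting (a sketch using standard kernel interfaces, assuming SysRq is enabled; nothing here is ZFS-specific):

# dump stacks of all uninterruptible (D-state) tasks to the kernel log
echo w > /proc/sysrq-trigger
# alternatively, widen the hung-task reporting window rather than silencing it
echo 600 > /proc/sys/kernel/hung_task_timeout_secs
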
tuxoko commented 8 years ago

@wipash If I'm reading this correctly, you don't have this patch in SPL: https://github.com/zfsonlinux/spl/commit/e843553d0341c4edd880f5ad40da8211669348bd. If so, you'll likely want to upgrade.
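
For anyone unsure, a hedged way to check whether that fix is in the SPL you are running is to compare the git hash embedded in the module version string (see the modinfo output above) against the commit, for example:

modinfo spl | grep ^version   # e.g. 0.6.5-33_g466bcf3, i.e. built from commit 466bcf3
# in a clone of zfsonlinux/spl, test whether the fix is an ancestor of that build:
git merge-base --is-ancestor e843553 466bcf3 && echo "patch included" || echo "patch missing"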

foosinn commented 8 years ago

Hello,

I have exactly the same issue with 0.6.5.6 on Ubuntu Trusty.

It seems to happen mostly when deleting a lot of files: the delete process and arc_prune hit 100% CPU load. Sometimes it is also triggered by an rsync process. The server is under heavy load and hits the issue every one to three days.
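
When arc_prune is spinning, the ARC counters in /proc/spl/kstat/zfs/arcstats can confirm whether pruning is running away (a quick sketch; field names are from 0.6.x-era ZFS on Linux and may differ in other releases):

# compare ARC size against its target, and count prune callbacks
grep -E '^(size|c|c_max|arc_prune) ' /proc/spl/kstat/zfs/arcstats
# sample repeatedly to see whether the arc_prune count keeps climbing
watch -n 5 "grep '^arc_prune' /proc/spl/kstat/zfs/arcstats"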

foosinn commented 8 years ago

We have moved to the 4.4 kernel on our servers and the issue is gone; the system has been stable for two weeks. The same fix also worked on a Debian Jessie server that had the same issue.

behlendorf commented 8 years ago

@foosinn were you previously observing this issue with a 3.10 kernel?

foosinn commented 8 years ago

I had 100% stock Debian Jessie (3.16) and Ubuntu Trusty (3.13) kernels.

On Debian I compiled the modules from source, since the repo was outdated at one point. On Ubuntu I used the repo.

waltersteinlein commented 7 years ago

I'm seeing similar things on 0.7.0-rc2 with kernel 3.10.0 on CentOS 7.2:

[Sa Jan 14 00:36:08 2017] INFO: task send_traverse:145465 blocked for more than 120 seconds.
[Sa Jan 14 00:36:08 2017] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sa Jan 14 00:36:08 2017] send_traverse   D 0000000000000000     0 145465      2 0x00000080
[Sa Jan 14 00:36:08 2017]  ffff881f61553690 0000000000000046 ffff880a9e173980 ffff881f61553fd8
[Sa Jan 14 00:36:08 2017]  ffff881f61553fd8 ffff881f61553fd8 ffff880a9e173980 ffff8814c33bfb98
[Sa Jan 14 00:36:08 2017]  ffff8814c33bfb68 ffff8814c33bfba0 ffff8814c33bfb90 0000000000000000
[Sa Jan 14 00:36:08 2017] Call Trace:
[Sa Jan 14 00:36:08 2017]  [<ffffffff8163bb59>] schedule+0x29/0x70
[Sa Jan 14 00:36:08 2017]  [<ffffffffa035c1ad>] cv_wait_common+0x10d/0x130 [spl]
[Sa Jan 14 00:36:08 2017]  [<ffffffff810a6b20>] ? wake_up_atomic_t+0x30/0x30
[Sa Jan 14 00:36:08 2017]  [<ffffffffa035c1e5>] __cv_wait+0x15/0x20 [spl]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa147bacb>] bqueue_enqueue+0x5b/0xc0 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148b8fa>] send_cb+0x15a/0x180 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148ecf9>] traverse_visitbp+0x4e9/0x840 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa03551b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148f7f0>] traverse_dnode+0x1d0/0x1f0 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148eecb>] traverse_visitbp+0x6bb/0x840 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148f6d1>] traverse_dnode+0xb1/0x1f0 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148ef95>] traverse_visitbp+0x785/0x840 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148f221>] traverse_impl+0x1d1/0x450 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148f865>] traverse_dataset_resume+0x55/0x60 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa1487ab9>] send_traverse_thread+0x59/0xb0 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa1487a60>] ? byteswap_record+0x2e0/0x2e0 [zfs]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa0356f01>] thread_generic_wrapper+0x71/0x80 [spl]
[Sa Jan 14 00:36:08 2017]  [<ffffffffa0356e90>] ? __thread_exit+0x20/0x20 [spl]
[Sa Jan 14 00:36:08 2017]  [<ffffffff810a5b2f>] kthread+0xcf/0xe0
[Sa Jan 14 00:36:08 2017]  [<ffffffff810b4856>] ? finish_task_switch+0x56/0x170
[Sa Jan 14 00:36:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:36:08 2017]  [<ffffffff81646a98>] ret_from_fork+0x58/0x90
[Sa Jan 14 00:36:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:38:08 2017] INFO: task send_traverse:145465 blocked for more than 120 seconds.
[Sa Jan 14 00:38:08 2017] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sa Jan 14 00:38:08 2017] send_traverse   D 0000000000000000     0 145465      2 0x00000080
[Sa Jan 14 00:38:08 2017]  ffff881f61553690 0000000000000046 ffff880a9e173980 ffff881f61553fd8
[Sa Jan 14 00:38:08 2017]  ffff881f61553fd8 ffff881f61553fd8 ffff880a9e173980 ffff8814c33bfb98
[Sa Jan 14 00:38:08 2017]  ffff8814c33bfb68 ffff8814c33bfba0 ffff8814c33bfb90 0000000000000000
[Sa Jan 14 00:38:08 2017] Call Trace:
[Sa Jan 14 00:38:08 2017]  [<ffffffff8163bb59>] schedule+0x29/0x70
[Sa Jan 14 00:38:08 2017]  [<ffffffffa035c1ad>] cv_wait_common+0x10d/0x130 [spl]
[Sa Jan 14 00:38:08 2017]  [<ffffffff810a6b20>] ? wake_up_atomic_t+0x30/0x30
[Sa Jan 14 00:38:08 2017]  [<ffffffffa035c1e5>] __cv_wait+0x15/0x20 [spl]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa147bacb>] bqueue_enqueue+0x5b/0xc0 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148b8fa>] send_cb+0x15a/0x180 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148ecf9>] traverse_visitbp+0x4e9/0x840 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa03551b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148f7f0>] traverse_dnode+0x1d0/0x1f0 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148eecb>] traverse_visitbp+0x6bb/0x840 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148f6d1>] traverse_dnode+0xb1/0x1f0 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148ef95>] traverse_visitbp+0x785/0x840 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148f221>] traverse_impl+0x1d1/0x450 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148f865>] traverse_dataset_resume+0x55/0x60 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa1487ab9>] send_traverse_thread+0x59/0xb0 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa1487a60>] ? byteswap_record+0x2e0/0x2e0 [zfs]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa0356f01>] thread_generic_wrapper+0x71/0x80 [spl]
[Sa Jan 14 00:38:08 2017]  [<ffffffffa0356e90>] ? __thread_exit+0x20/0x20 [spl]
[Sa Jan 14 00:38:08 2017]  [<ffffffff810a5b2f>] kthread+0xcf/0xe0
[Sa Jan 14 00:38:08 2017]  [<ffffffff810b4856>] ? finish_task_switch+0x56/0x170
[Sa Jan 14 00:38:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:38:08 2017]  [<ffffffff81646a98>] ret_from_fork+0x58/0x90
[Sa Jan 14 00:38:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:40:08 2017] INFO: task send_traverse:145465 blocked for more than 120 seconds.
[Sa Jan 14 00:40:08 2017] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sa Jan 14 00:40:08 2017] send_traverse   D 0000000000000000     0 145465      2 0x00000080
[Sa Jan 14 00:40:08 2017]  ffff881f61553690 0000000000000046 ffff880a9e173980 ffff881f61553fd8
[Sa Jan 14 00:40:08 2017]  ffff881f61553fd8 ffff881f61553fd8 ffff880a9e173980 ffff8814c33bfb98
[Sa Jan 14 00:40:08 2017]  ffff8814c33bfb68 ffff8814c33bfba0 ffff8814c33bfb90 0000000000000000
[Sa Jan 14 00:40:08 2017] Call Trace:
[Sa Jan 14 00:40:08 2017]  [<ffffffff8163bb59>] schedule+0x29/0x70
[Sa Jan 14 00:40:08 2017]  [<ffffffffa035c1ad>] cv_wait_common+0x10d/0x130 [spl]
[Sa Jan 14 00:40:08 2017]  [<ffffffff810a6b20>] ? wake_up_atomic_t+0x30/0x30
[Sa Jan 14 00:40:08 2017]  [<ffffffffa035c1e5>] __cv_wait+0x15/0x20 [spl]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa147bacb>] bqueue_enqueue+0x5b/0xc0 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148b8fa>] send_cb+0x15a/0x180 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148ecf9>] traverse_visitbp+0x4e9/0x840 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa03551b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148f7f0>] traverse_dnode+0x1d0/0x1f0 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148eecb>] traverse_visitbp+0x6bb/0x840 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148f6d1>] traverse_dnode+0xb1/0x1f0 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148ef95>] traverse_visitbp+0x785/0x840 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148f221>] traverse_impl+0x1d1/0x450 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148f865>] traverse_dataset_resume+0x55/0x60 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa1487ab9>] send_traverse_thread+0x59/0xb0 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa1487a60>] ? byteswap_record+0x2e0/0x2e0 [zfs]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa0356f01>] thread_generic_wrapper+0x71/0x80 [spl]
[Sa Jan 14 00:40:08 2017]  [<ffffffffa0356e90>] ? __thread_exit+0x20/0x20 [spl]
[Sa Jan 14 00:40:08 2017]  [<ffffffff810a5b2f>] kthread+0xcf/0xe0
[Sa Jan 14 00:40:08 2017]  [<ffffffff810b4856>] ? finish_task_switch+0x56/0x170
[Sa Jan 14 00:40:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:40:08 2017]  [<ffffffff81646a98>] ret_from_fork+0x58/0x90
[Sa Jan 14 00:40:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:42:08 2017] INFO: task send_traverse:145465 blocked for more than 120 seconds.
[Sa Jan 14 00:42:08 2017] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sa Jan 14 00:42:08 2017] send_traverse   D 0000000000000000     0 145465      2 0x00000080
[Sa Jan 14 00:42:08 2017]  ffff881f61553690 0000000000000046 ffff880a9e173980 ffff881f61553fd8
[Sa Jan 14 00:42:08 2017]  ffff881f61553fd8 ffff881f61553fd8 ffff880a9e173980 ffff8814c33bfb98
[Sa Jan 14 00:42:08 2017]  ffff8814c33bfb68 ffff8814c33bfba0 ffff8814c33bfb90 0000000000000000
[Sa Jan 14 00:42:08 2017] Call Trace:
[Sa Jan 14 00:42:08 2017]  [<ffffffff8163bb59>] schedule+0x29/0x70
[Sa Jan 14 00:42:08 2017]  [<ffffffffa035c1ad>] cv_wait_common+0x10d/0x130 [spl]
[Sa Jan 14 00:42:08 2017]  [<ffffffff810a6b20>] ? wake_up_atomic_t+0x30/0x30
[Sa Jan 14 00:42:08 2017]  [<ffffffffa035c1e5>] __cv_wait+0x15/0x20 [spl]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa147bacb>] bqueue_enqueue+0x5b/0xc0 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148b8fa>] send_cb+0x15a/0x180 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148ecf9>] traverse_visitbp+0x4e9/0x840 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa03551b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148f7f0>] traverse_dnode+0x1d0/0x1f0 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148eecb>] traverse_visitbp+0x6bb/0x840 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148f6d1>] traverse_dnode+0xb1/0x1f0 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148ef95>] traverse_visitbp+0x785/0x840 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148f221>] traverse_impl+0x1d1/0x450 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148f865>] traverse_dataset_resume+0x55/0x60 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa1487ab9>] send_traverse_thread+0x59/0xb0 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa1487a60>] ? byteswap_record+0x2e0/0x2e0 [zfs]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa0356f01>] thread_generic_wrapper+0x71/0x80 [spl]
[Sa Jan 14 00:42:08 2017]  [<ffffffffa0356e90>] ? __thread_exit+0x20/0x20 [spl]
[Sa Jan 14 00:42:08 2017]  [<ffffffff810a5b2f>] kthread+0xcf/0xe0
[Sa Jan 14 00:42:08 2017]  [<ffffffff810b4856>] ? finish_task_switch+0x56/0x170
[Sa Jan 14 00:42:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:42:08 2017]  [<ffffffff81646a98>] ret_from_fork+0x58/0x90
[Sa Jan 14 00:42:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:44:08 2017] INFO: task send_traverse:145465 blocked for more than 120 seconds.
[Sa Jan 14 00:44:08 2017] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sa Jan 14 00:44:08 2017] send_traverse   D 0000000000000000     0 145465      2 0x00000080
[Sa Jan 14 00:44:08 2017]  ffff881f61553690 0000000000000046 ffff880a9e173980 ffff881f61553fd8
[Sa Jan 14 00:44:08 2017]  ffff881f61553fd8 ffff881f61553fd8 ffff880a9e173980 ffff8814c33bfb98
[Sa Jan 14 00:44:08 2017]  ffff8814c33bfb68 ffff8814c33bfba0 ffff8814c33bfb90 0000000000000000
[Sa Jan 14 00:44:08 2017] Call Trace:
[Sa Jan 14 00:44:08 2017]  [<ffffffff8163bb59>] schedule+0x29/0x70
[Sa Jan 14 00:44:08 2017]  [<ffffffffa035c1ad>] cv_wait_common+0x10d/0x130 [spl]
[Sa Jan 14 00:44:08 2017]  [<ffffffff810a6b20>] ? wake_up_atomic_t+0x30/0x30
[Sa Jan 14 00:44:08 2017]  [<ffffffffa035c1e5>] __cv_wait+0x15/0x20 [spl]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa147bacb>] bqueue_enqueue+0x5b/0xc0 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148b8fa>] send_cb+0x15a/0x180 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148ecf9>] traverse_visitbp+0x4e9/0x840 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa03551b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148f7f0>] traverse_dnode+0x1d0/0x1f0 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148eecb>] traverse_visitbp+0x6bb/0x840 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148f6d1>] traverse_dnode+0xb1/0x1f0 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148ef95>] traverse_visitbp+0x785/0x840 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148f221>] traverse_impl+0x1d1/0x450 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148f865>] traverse_dataset_resume+0x55/0x60 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa1487ab9>] send_traverse_thread+0x59/0xb0 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa1487a60>] ? byteswap_record+0x2e0/0x2e0 [zfs]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa0356f01>] thread_generic_wrapper+0x71/0x80 [spl]
[Sa Jan 14 00:44:08 2017]  [<ffffffffa0356e90>] ? __thread_exit+0x20/0x20 [spl]
[Sa Jan 14 00:44:08 2017]  [<ffffffff810a5b2f>] kthread+0xcf/0xe0
[Sa Jan 14 00:44:08 2017]  [<ffffffff810b4856>] ? finish_task_switch+0x56/0x170
[Sa Jan 14 00:44:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:44:08 2017]  [<ffffffff81646a98>] ret_from_fork+0x58/0x90
[Sa Jan 14 00:44:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:46:08 2017] INFO: task send_traverse:145465 blocked for more than 120 seconds.
[Sa Jan 14 00:46:08 2017] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sa Jan 14 00:46:08 2017] send_traverse   D 0000000000000000     0 145465      2 0x00000080
[Sa Jan 14 00:46:08 2017]  ffff881f61553690 0000000000000046 ffff880a9e173980 ffff881f61553fd8
[Sa Jan 14 00:46:08 2017]  ffff881f61553fd8 ffff881f61553fd8 ffff880a9e173980 ffff8814c33bfb98
[Sa Jan 14 00:46:08 2017]  ffff8814c33bfb68 ffff8814c33bfba0 ffff8814c33bfb90 0000000000000000
[Sa Jan 14 00:46:08 2017] Call Trace:
[Sa Jan 14 00:46:08 2017]  [<ffffffff8163bb59>] schedule+0x29/0x70
[Sa Jan 14 00:46:08 2017]  [<ffffffffa035c1ad>] cv_wait_common+0x10d/0x130 [spl]
[Sa Jan 14 00:46:08 2017]  [<ffffffff810a6b20>] ? wake_up_atomic_t+0x30/0x30
[Sa Jan 14 00:46:08 2017]  [<ffffffffa035c1e5>] __cv_wait+0x15/0x20 [spl]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa147bacb>] bqueue_enqueue+0x5b/0xc0 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148b8fa>] send_cb+0x15a/0x180 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148ecf9>] traverse_visitbp+0x4e9/0x840 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa03551b0>] ? spl_kmem_zalloc+0xc0/0x170 [spl]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148f7f0>] traverse_dnode+0x1d0/0x1f0 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148eecb>] traverse_visitbp+0x6bb/0x840 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148eb05>] traverse_visitbp+0x2f5/0x840 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148f6d1>] traverse_dnode+0xb1/0x1f0 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148ef95>] traverse_visitbp+0x785/0x840 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148f221>] traverse_impl+0x1d1/0x450 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148f865>] traverse_dataset_resume+0x55/0x60 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa148b7a0>] ? dmu_send_impl+0x1690/0x1690 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa1487ab9>] send_traverse_thread+0x59/0xb0 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa1487a60>] ? byteswap_record+0x2e0/0x2e0 [zfs]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa0356f01>] thread_generic_wrapper+0x71/0x80 [spl]
[Sa Jan 14 00:46:08 2017]  [<ffffffffa0356e90>] ? __thread_exit+0x20/0x20 [spl]
[Sa Jan 14 00:46:08 2017]  [<ffffffff810a5b2f>] kthread+0xcf/0xe0
[Sa Jan 14 00:46:08 2017]  [<ffffffff810b4856>] ? finish_task_switch+0x56/0x170
[Sa Jan 14 00:46:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
[Sa Jan 14 00:46:08 2017]  [<ffffffff81646a98>] ret_from_fork+0x58/0x90
[Sa Jan 14 00:46:08 2017]  [<ffffffff810a5a60>] ? kthread_create_on_node+0x140/0x140
<...>

(The identical send_traverse trace repeats every two minutes, at 00:48:08, 00:50:08, and 00:52:08.)
behlendorf commented 7 years ago

Closing, believed to be resolved by 03928896e114cfc4752c089727474134a2b0204b.

jgottula commented 7 years ago

@behlendorf I think this issue probably needs to be reopened: I'm experiencing consistent, permanent send_traverse task lockups, just like @waltersteinlein described above, when attempting certain zfs send operations on completely up-to-date kernel and ZoL versions.

Aug 15 14:43:28 jgpc kernel: INFO: task send_traverse:4073 blocked for more than 60 seconds.
Aug 15 14:43:28 jgpc kernel:       Tainted: P        W  O    4.12.7-1 #4
Aug 15 14:43:28 jgpc kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 14:43:28 jgpc kernel: send_traverse   D    0  4073      2 0x00000000
Aug 15 14:43:28 jgpc kernel: Call Trace:
Aug 15 14:43:28 jgpc kernel:  __schedule+0x21f/0x860
Aug 15 14:43:28 jgpc kernel:  schedule+0x53/0xa8
Aug 15 14:43:28 jgpc kernel:  cv_wait_common+0x10d/0x120 [spl]
Aug 15 14:43:28 jgpc kernel:  ? wake_bit_function+0x68/0x68
Aug 15 14:43:28 jgpc kernel:  __cv_wait+0x28/0x30 [spl]
Aug 15 14:43:28 jgpc kernel:  bqueue_enqueue+0x75/0xe8 [zfs]
Aug 15 14:43:28 jgpc kernel:  send_cb+0xe2/0x138 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x38d/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  ? spl_kmem_alloc+0xdc/0x180 [spl]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x544/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x544/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x544/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_dnode+0xec/0x1a0 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x7c9/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x544/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x544/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x544/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x544/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x544/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x544/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_dnode+0xec/0x1a0 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_visitbp+0x926/0x9e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_impl+0x1d1/0x448 [zfs]
Aug 15 14:43:28 jgpc kernel:  ? byteswap_record+0x2e8/0x2e8 [zfs]
Aug 15 14:43:28 jgpc kernel:  traverse_dataset_resume+0x59/0x60 [zfs]
Aug 15 14:43:28 jgpc kernel:  ? dmu_recv_existing_end+0x90/0x90 [zfs]
Aug 15 14:43:28 jgpc kernel:  send_traverse_thread+0x6c/0xc8 [zfs]
Aug 15 14:43:28 jgpc kernel:  thread_generic_wrapper+0x81/0x90 [spl]
Aug 15 14:43:28 jgpc kernel:  kthread+0x134/0x150
Aug 15 14:43:28 jgpc kernel:  ? __thread_exit+0x30/0x30 [spl]
Aug 15 14:43:28 jgpc kernel:  ? kthread_create_on_node+0x50/0x50
Aug 15 14:43:28 jgpc kernel:  ? kthread_create_on_node+0x50/0x50
Aug 15 14:43:28 jgpc kernel:  ret_from_fork+0x22/0x30

(The same trace repeats a minute later, at 14:44:29.)

Currently I'm seeing this issue when I try to do a zfs send over the network from my machine running kernel 4.12.7 with zfsonlinux/spl@9df9692 and zfsonlinux/zfs@d49d9c2bd. I was also experiencing it on kernel 4.11.3 with zfsonlinux/spl@ac48361 and zfsonlinux/zfs@829aaf280. (I can't say for sure about earlier kernel/ZoL versions, since I hadn't been doing any substantial zfs send operations until just recently.)

(I also happen to be in a situation where I have a relatively urgent need to send this data off of the pool in question due to some hardware failures, so any known temporary workarounds would be welcome.)

jgottula commented 7 years ago

Oh, and I neglected to mention that the machine on the receiving end of this operation also gets stuck. It's currently running kernel 4.12.6 with zfsonlinux/spl@9df9692 and zfsonlinux/zfs@d49d9c2bd (a very slightly older kernel, but the same ZoL versions as the sending side).

Aug 15 14:45:50 helium kernel: INFO: task txg_quiesce:26727 blocked for more than 120 seconds.
Aug 15 14:45:50 helium kernel:       Tainted: P           O    4.12.6-1-ARCH #1
Aug 15 14:45:50 helium kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 14:45:50 helium kernel: txg_quiesce     D    0 26727      2 0x00000000
Aug 15 14:45:50 helium kernel: Call Trace:
Aug 15 14:45:50 helium kernel:  __schedule+0x236/0x870
Aug 15 14:45:50 helium kernel:  schedule+0x3d/0x90
Aug 15 14:45:50 helium kernel:  cv_wait_common+0x11c/0x130 [spl]
Aug 15 14:45:50 helium kernel:  ? wake_bit_function+0x60/0x60
Aug 15 14:45:50 helium kernel:  __cv_wait+0x15/0x20 [spl]
Aug 15 14:45:50 helium kernel:  txg_quiesce_thread+0x2d5/0x3e0 [zfs]
Aug 15 14:45:50 helium kernel:  ? txg_wait_open+0x100/0x100 [zfs]
Aug 15 14:45:50 helium kernel:  thread_generic_wrapper+0x72/0x80 [spl]
Aug 15 14:45:50 helium kernel:  kthread+0x125/0x140
Aug 15 14:45:50 helium kernel:  ? __thread_exit+0x20/0x20 [spl]
Aug 15 14:45:50 helium kernel:  ? kthread_create_on_node+0x70/0x70
Aug 15 14:45:50 helium kernel:  ? kthread_create_on_node+0x70/0x70
Aug 15 14:45:50 helium kernel:  ret_from_fork+0x25/0x30
Aug 15 14:45:50 helium kernel: INFO: task zfs:28062 blocked for more than 120 seconds.
Aug 15 14:45:50 helium kernel:       Tainted: P           O    4.12.6-1-ARCH #1
Aug 15 14:45:50 helium kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 15 14:45:50 helium kernel: zfs             D    0 28062  28060 0x00000000
Aug 15 14:45:50 helium kernel: Call Trace:
Aug 15 14:45:50 helium kernel:  __schedule+0x236/0x870
Aug 15 14:45:50 helium kernel:  schedule+0x3d/0x90
Aug 15 14:45:50 helium kernel:  cv_wait_common+0x11c/0x130 [spl]
Aug 15 14:45:50 helium kernel:  ? wake_bit_function+0x60/0x60
Aug 15 14:45:50 helium kernel:  __cv_wait+0x15/0x20 [spl]
Aug 15 14:45:50 helium kernel:  txg_wait_synced+0xdd/0x130 [zfs]
Aug 15 14:45:50 helium kernel:  dmu_recv_cleanup_ds+0x49/0xc0 [zfs]
Aug 15 14:45:50 helium kernel:  ? __schedule+0x23e/0x870
Aug 15 14:45:50 helium kernel:  ? __wake_up+0x44/0x50
Aug 15 14:45:50 helium kernel:  ? cv_wait_common+0xeb/0x130 [spl]
Aug 15 14:45:50 helium kernel:  ? wake_bit_function+0x60/0x60
Aug 15 14:45:50 helium kernel:  dmu_recv_stream+0x28e/0x1080 [zfs]
Aug 15 14:45:50 helium kernel:  ? kfree+0x175/0x190
Aug 15 14:45:50 helium kernel:  zfs_ioc_recv_impl+0x48d/0x10f0 [zfs]
Aug 15 14:45:50 helium kernel:  zfs_ioc_recv_new+0x31e/0x3e0 [zfs]
Aug 15 14:45:50 helium kernel:  ? try_to_merge_one_page+0x539/0x790
Aug 15 14:45:50 helium kernel:  ? spl_kmem_alloc_impl+0xd1/0x180 [spl]
Aug 15 14:45:50 helium kernel:  ? spl_vmem_alloc+0x19/0x20 [spl]
Aug 15 14:45:50 helium kernel:  ? nv_alloc_sleep_spl+0x1f/0x30 [znvpair]
Aug 15 14:45:50 helium kernel:  ? nv_mem_zalloc.isra.12+0x28/0x40 [znvpair]
Aug 15 14:45:50 helium kernel:  ? nvlist_xalloc.part.13+0x63/0xc0 [znvpair]
Aug 15 14:45:50 helium kernel:  zfsdev_ioctl+0x442/0x600 [zfs]
Aug 15 14:45:50 helium kernel:  do_vfs_ioctl+0xa5/0x600
Aug 15 14:45:50 helium kernel:  ? vfs_write+0x166/0x1a0
Aug 15 14:45:50 helium kernel:  SyS_ioctl+0x79/0x90
Aug 15 14:45:50 helium kernel:  entry_SYSCALL_64_fastpath+0x1a/0xa5
Aug 15 14:45:50 helium kernel: RIP: 0033:0x7f980d6948b7
Aug 15 14:45:50 helium kernel: RSP: 002b:00007ffc6613f668 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Aug 15 14:45:50 helium kernel: RAX: ffffffffffffffda RBX: 00000050ef3e9370 RCX: 00007f980d6948b7
Aug 15 14:45:50 helium kernel: RDX: 00007ffc6613f680 RSI: 0000000000005a46 RDI: 0000000000000005
Aug 15 14:45:50 helium kernel: RBP: 00000050ef3e9348 R08: 0000000000000003 R09: 000000000000007c
Aug 15 14:45:50 helium kernel: R10: 0000000000000830 R11: 0000000000000246 R12: 00000050ef3e5540
Aug 15 14:45:50 helium kernel: R13: 00000050ef160080 R14: 0000000000000002 R15: 00000050ef3e93c0

(The same pair of txg_quiesce and zfs traces repeats every two minutes, at 14:47:53, 14:49:56, 14:51:58, and 14:54:01.)

The zfs send process on the sending side could be interrupted with Ctrl+C after a short delay; the zfs recv process on the receiving side, however, just won't die, no matter what signal I send it (including SIGKILL) and no matter how long I wait. It appears to be stuck in that ioctl syscall indefinitely.

Note that I've tried the receive side of this operation on two different machines (albeit with the same kernel/ZoL versions and drives), with the same results.

Here's the exact command I've been executing (on the sending machine) to do the transfer:

zfs send -R -L -c -v pool@migration1-20170814-2010PDT | lz4 - | ssh helium 'lz4 -d - | zfs recv -s -o checksum=sha512 -d -F -v lifeboat/pool'

So now I can't even tell for sure whether this is a send problem or a receive problem.
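
A minimal way to split that test in two, sketched from the command above (the scratch path here is hypothetical):

# send half only: if this also hangs, zfs send itself is at fault
zfs send -R -L -c -v pool@migration1-20170814-2010PDT > /dev/null

# receive half only: capture the stream to a local file, then replay it to the receiver
zfs send -R -L -c -v pool@migration1-20170814-2010PDT > /scratch/migration1.stream
ssh helium 'zfs recv -s -o checksum=sha512 -d -F -v lifeboat/pool' < /scratch/migration1.stream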

behlendorf commented 7 years ago

Reopening.

jgottula commented 7 years ago

I've managed to narrow things down somewhat by sending snapshots one by one and removing some of the command-line options.

I've found that the receiving end always locks up if the snapshot it's receiving is from a particular point in time onward.

So while these work just fine:

<...>
zfs send -p -v -i auto_2017-05-01-0700_monthly pool@auto_2017-06-01-0700_monthly | ssh helium 'zfs recv -v lifeboat/pool'
zfs send -p -v -i auto_2017-06-01-0700_monthly pool@auto_2017-06-07-1933_monthly | ssh helium 'zfs recv -v lifeboat/pool'

Any of these will cause a lockup:

zfs send -p -v -i auto_2017-06-07-1933_monthly pool@auto_2017-06-26-0700_weekly  | ssh helium 'zfs recv -v lifeboat/pool'
zfs send -p -v -i auto_2017-06-07-1933_monthly pool@auto_2017-07-01-0700_monthly | ssh helium 'zfs recv -v lifeboat/pool'
<...>

(Based on the following send-side snapshot list:)

<...>
pool@auto_2017-04-01-0700_monthly   128K      -   174K  -
pool@auto_2017-05-01-0700_monthly   209K      -   291K  -
pool@auto_2017-06-01-0700_monthly   140K      -   256K  -
pool@auto_2017-06-07-1933_monthly   140K      -   256K  -
pool@auto_2017-06-26-0700_weekly      0B      -   256K  -
pool@auto_2017-07-01-0700_monthly     0B      -   256K  -
pool@auto_2017-07-03-0700_weekly      0B      -   256K  -
pool@auto_2017-07-10-0700_weekly      0B      -   256K  -
<...>

And this top-level pool filesystem stores nothing except child filesystems, uses no encryption, etc.

I'm slightly suspicious of the encryption support commit, zfsonlinux/zfs@b52563034230b35f0562b6f40ad1a00f02bd9a05, since it's such a big commit that it touches a lot of the codebase, including send/receive. For example, it changes the cases in which dmu_recv_cleanup_ds calls txg_wait_synced, and that call is one of the relevant parts of the lockup, as far as I can tell from the stack traces I posted earlier.

But I don't know whether I could downgrade the receive-side ZoL version to test whether things work better before that commit, since I've already upgraded the send-side pool version/features to the very latest, and I think that would break send/recv compatibility. Or maybe it wouldn't, since I haven't actually used encryption on the pool at all? I'm not totally clear on that.
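
(I'm not sure feature state maps one-to-one onto send/recv compatibility, but it might help to see which feature flags are actually active on disk, as opposed to merely enabled, on the send-side pool; something like:)

zpool get all pool | grep feature@

Features still listed as "enabled" have never been used on disk; the "active" ones would be the potential blockers.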

jgottula commented 7 years ago

I just checked my package manager history on the sending side (where the pool has been living its life). It appears that very close to the time the pool@auto_2017-06-07-1933_monthly snapshot was created (I'm not 100% sure whether it was right before or right after), I upgraded from kernel 4.10.3 with zfs zfsonlinux/zfs@a3478c074 and spl zfsonlinux/spl@e0aacd9 to kernel 4.11.3 with zfs zfsonlinux/zfs@829aaf280 and spl zfsonlinux/spl@ac48361. I then ran the latter set of versions on the machine until literally yesterday.

So that might be relevant in some way; that software upgrade coincides with the "good snaps" / "bad snaps" boundary pretty nicely.
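
(For the record, the upgrade timestamps above came from the package manager log; on Arch, something like this pulls the relevant lines out of the default log:)

grep -E 'upgraded (linux|spl|zfs)' /var/log/pacman.log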

trisk commented 7 years ago

Can you check if the send side still encounters a problem if the stream for those snapshots is written to /dev/null? The producer thread is waiting for the bqueue to be drained by dmu_send_impl, which might just be stuck waiting for the receive side to consume the stream.
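
Concretely, using one of the snapshot pairs reported above as hanging (a sketch; snapshot names taken from @jgottula's list):

zfs send -p -v -i auto_2017-06-07-1933_monthly pool@auto_2017-06-26-0700_weekly > /dev/null

If that completes, the producer (send_traverse) thread is fine and the stall is on the consuming side, i.e. the receiver failing to drain the stream.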

pruiz commented 6 years ago

I think I am facing the same issue here on CentOS 7.4 (3.10.0-693.21.1.el7.x86_64) with zfs-0.7.6-1.el7_4.x86_64. It happened with 0.7.4 too (I upgraded to 0.7.6 hoping for a fix).

A zfs send -I old new piped to zfs recv over ssh hangs on the recv side. I've narrowed it down to the receive: writing the send stream to a file and applying it with zfs receive -v -d -F TANK/Stores/nas < /data/017454.snap ends up with zfs recv hanging too.

The snapshot stream is over 12 GB, but the receive hangs after having read roughly 2 GB. At that point, dmesg shows a few repetitions of:

[Tue Mar 20 01:11:51 2018] INFO: task zfs:15938 blocked for more than 120 seconds.
[Tue Mar 20 01:11:51 2018] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Tue Mar 20 01:11:51 2018] zfs             D ffff8800d8668000     0 15938  30898 0x00000080
[Tue Mar 20 01:11:51 2018] Call Trace:
[Tue Mar 20 01:11:51 2018]  [<ffffffff816b40e9>] schedule+0x29/0x70
[Tue Mar 20 01:11:51 2018]  [<ffffffffc059d515>] cv_wait_common+0x125/0x150 [spl]
[Tue Mar 20 01:11:51 2018]  [<ffffffff810b4fc0>] ? wake_up_atomic_t+0x30/0x30
[Tue Mar 20 01:11:51 2018]  [<ffffffffc059d555>] __cv_wait+0x15/0x20 [spl]
[Tue Mar 20 01:11:51 2018]  [<ffffffffc07e156b>] bqueue_enqueue+0x6b/0xe0 [zfs]
[Tue Mar 20 01:11:51 2018]  [<ffffffffc07f5b8b>] dmu_recv_stream+0x67b/0xec0 [zfs]
[Tue Mar 20 01:11:51 2018]  [<ffffffffc0878489>] zfs_ioc_recv_impl+0x2d9/0x1100 [zfs]
[Tue Mar 20 01:11:51 2018]  [<ffffffff816b2192>] ? mutex_lock+0x12/0x2f
[Tue Mar 20 01:11:51 2018]  [<ffffffffc07d5601>] ? arc_buf_access+0x141/0x270 [zfs]
[Tue Mar 20 01:11:51 2018]  [<ffffffffc0879485>] zfs_ioc_recv+0x1d5/0x2d0 [zfs]
[Tue Mar 20 01:11:51 2018]  [<ffffffff812bf47e>] ? selinux_capable+0x2e/0x40
[Tue Mar 20 01:11:51 2018]  [<ffffffff812b71ae>] ? security_capable+0x1e/0x20
[Tue Mar 20 01:11:51 2018]  [<ffffffff81098149>] ? ns_capable+0x29/0x50
[Tue Mar 20 01:11:51 2018]  [<ffffffff81098187>] ? capable+0x17/0x20
[Tue Mar 20 01:11:51 2018]  [<ffffffffc082591a>] ? priv_policy.isra.1+0x3a/0x50 [zfs]
[Tue Mar 20 01:11:51 2018]  [<ffffffffc0825b58>] ? secpolicy_zfs+0x18/0x20 [zfs]
[Tue Mar 20 01:11:51 2018]  [<ffffffff811e4e55>] ? __kmalloc+0x55/0x230
[Tue Mar 20 01:11:51 2018]  [<ffffffffc0595f0f>] ? strdup+0x3f/0x60 [spl]
[Tue Mar 20 01:11:51 2018]  [<ffffffffc0876a56>] zfsdev_ioctl+0x606/0x650 [zfs]
[Tue Mar 20 01:11:51 2018]  [<ffffffff81219e90>] do_vfs_ioctl+0x350/0x560
[Tue Mar 20 01:11:51 2018]  [<ffffffff812bec3f>] ? file_has_perm+0x9f/0xb0
[Tue Mar 20 01:11:51 2018]  [<ffffffff8121a141>] SyS_ioctl+0xa1/0xc0
[Tue Mar 20 01:11:51 2018]  [<ffffffff816c0655>] ? system_call_after_swapgs+0xa2/0x146
[Tue Mar 20 01:11:51 2018]  [<ffffffff816c0715>] system_call_fastpath+0x1c/0x21
[Tue Mar 20 01:11:51 2018]  [<ffffffff816c0661>] ? system_call_after_swapgs+0xae/0x146

From this point onwards, I/O stops entirely (even in zpool iostat) while kernel CPU usage keeps spinning. I am also unable to kill the zfs recv process, no matter what signal I try.

And cat /proc/<zfs-recv-pid>/stack shows the following:

[<ffffffffc059d515>] cv_wait_common+0x125/0x150 [spl]
[<ffffffffc059d555>] __cv_wait+0x15/0x20 [spl]
[<ffffffffc07e156b>] bqueue_enqueue+0x6b/0xe0 [zfs]
[<ffffffffc07f5b8b>] dmu_recv_stream+0x67b/0xec0 [zfs]
[<ffffffffc0878489>] zfs_ioc_recv_impl+0x2d9/0x1100 [zfs]
[<ffffffffc0879485>] zfs_ioc_recv+0x1d5/0x2d0 [zfs]
[<ffffffffc0876a56>] zfsdev_ioctl+0x606/0x650 [zfs]
[<ffffffff81219e90>] do_vfs_ioctl+0x350/0x560
[<ffffffff8121a141>] SyS_ioctl+0xa1/0xc0
[<ffffffff816c0715>] system_call_fastpath+0x1c/0x21
[<ffffffffffffffff>] 0xffffffffffffffff