openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

spl_panic when receiving encrypted dataset #6821

Closed sjau closed 6 years ago

sjau commented 7 years ago

System information

| Type | Version/Name |
| --- | --- |
| Distribution Name | NixOS |
| Distribution Version | NixOS Unstable Small |
| Linux Kernel | 4.9.58 |
| Architecture | x86_64 |
| ZFS Version | 0.7.0-1 |
| SPL Version | 0.7.0-1 |

Describe the problem you're observing

I'm trying to back up encrypted datasets from my notebook to my home server. Both run the same NixOS version. However, it doesn't work.

When I use the -wR options to send the full dataset, spl_panic appears on the receiving end and the receive never finishes.

If I omit the -R option, sending the full dataset works. However, when I then try to send an incremental snapshot, the same thing happens again: spl_panic.

Describe how to reproduce the problem

On the notebook I have tank/encZFS/Nixos; encZFS was created this way: zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o mountpoint=none -o atime=off ${zfsPool}/encZFS

On the server I have serviTank/BU/subi; none of these datasets is encrypted.

I then took a snapshot and tried to send like this:

zfs send -wR tank/encZFS/Nixos@encZFSSend_2017-11-04_12-31 | ssh root@10.0.0.3 'zfs receive serviTank/BU/subi/Nixos'

It seems everything was transferred correctly:

zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
serviTank                  137G  3.38T    96K  /serviTank
serviTank/BU              95.8G  3.38T    96K  none
serviTank/BU/subi         95.8G  3.38T    96K  none
serviTank/BU/subi/Nixos   95.8G  3.38T  95.8G  legacy
serviTank/encZFS          40.8G  3.38T  1.39M  none
serviTank/encZFS/BU       2.78M  3.38T  1.39M  none
serviTank/encZFS/BU/subi  1.39M  3.38T  1.39M  none
serviTank/encZFS/Nixos    40.8G  3.38T  5.64G  legacy

However, the zfs send/receive command never finishes, and on the server side dmesg shows spl_panic:

[ 1556.014734] VERIFY3(0 == dmu_object_dirty_raw(os, object, tx)) failed (0 == 17)
[ 1556.014757] PANIC at dmu.c:937:dmu_free_long_object_impl()
[ 1556.014770] Showing stack for process 18808
[ 1556.014772] CPU: 5 PID: 18808 Comm: receive_writer Tainted: P           O    4.9.58 #1-NixOS
[ 1556.014772] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z97 Pro4, BIOS P2.50 05/27/2016
[ 1556.014773]  ffffaf4e11137b38 ffffffff942f7a12 ffffffffc02c4e03 00000000000003a9
[ 1556.014775]  ffffaf4e11137b48 ffffffffc00759f2 ffffaf4e11137cd0 ffffffffc0075ac5
[ 1556.014776]  0000000000000000 ffffaf4e00000030 ffffaf4e11137ce0 ffffaf4e11137c80
[ 1556.014777] Call Trace:
[ 1556.014781]  [<ffffffff942f7a12>] dump_stack+0x63/0x81
[ 1556.014785]  [<ffffffffc00759f2>] spl_dumpstack+0x42/0x50 [spl]
[ 1556.014787]  [<ffffffffc0075ac5>] spl_panic+0xc5/0x100 [spl]
[ 1556.014806]  [<ffffffffc016ec26>] ? dbuf_rele+0x36/0x40 [zfs]
[ 1556.014816]  [<ffffffffc0190107>] ? dnode_hold_impl+0xb57/0xc40 [zfs]
[ 1556.014825]  [<ffffffffc0190443>] ? dnode_setdirty+0x83/0x100 [zfs]
[ 1556.014826]  [<ffffffff945671e2>] ? mutex_lock+0x12/0x30
[ 1556.014839]  [<ffffffffc01bf84b>] ? multilist_sublist_unlock+0x2b/0x40 [zfs]
[ 1556.014848]  [<ffffffffc019020b>] ? dnode_hold+0x1b/0x20 [zfs]
[ 1556.014857]  [<ffffffffc017aa7a>] dmu_free_long_object_impl.part.11+0xba/0xf0 [zfs]
[ 1556.014865]  [<ffffffffc017ab24>] dmu_free_long_object_raw+0x34/0x40 [zfs]
[ 1556.014873]  [<ffffffffc0187858>] receive_freeobjects.isra.11+0x58/0x110 [zfs]
[ 1556.014881]  [<ffffffffc0187cb5>] receive_writer_thread+0x3a5/0xd50 [zfs]
[ 1556.014883]  [<ffffffff941ce021>] ? __slab_free+0xa1/0x2e0
[ 1556.014884]  [<ffffffff940a5200>] ? set_next_entity+0x70/0x890
[ 1556.014886]  [<ffffffffc006ff53>] ? spl_kmem_free+0x33/0x40 [spl]
[ 1556.014887]  [<ffffffffc00725d0>] ? __thread_exit+0x20/0x20 [spl]
[ 1556.014895]  [<ffffffffc0187910>] ? receive_freeobjects.isra.11+0x110/0x110 [zfs]
[ 1556.014896]  [<ffffffffc00725d0>] ? __thread_exit+0x20/0x20 [spl]
[ 1556.014898]  [<ffffffffc0072642>] thread_generic_wrapper+0x72/0x80 [spl]
[ 1556.014899]  [<ffffffff9408e457>] kthread+0xd7/0xf0
[ 1556.014899]  [<ffffffff9408e380>] ? kthread_park+0x60/0x60
[ 1556.014901]  [<ffffffff9456a155>] ret_from_fork+0x25/0x30
[ 1721.304223] INFO: task txg_quiesce:468 blocked for more than 120 seconds.
[ 1721.304317]       Tainted: P           O    4.9.58 #1-NixOS
[ 1721.304376] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1721.304456] txg_quiesce     D    0   468      2 0x00000000
[ 1721.304463]  ffff92d789a44400 0000000000000000 ffff92d7afbd7ec0 ffff92d783539a80
[ 1721.304469]  ffff92d78c355cc0 ffffaf4e10e1fd30 ffffffff94565082 ffffaf4e10e1fd00
[ 1721.304474]  0000000000000246 0000000180200010 ffffaf4e10e1fd50 ffff92d783539a80
[ 1721.304479] Call Trace:
[ 1721.304493]  [<ffffffff94565082>] ? __schedule+0x192/0x660
[ 1721.304500]  [<ffffffff94565586>] schedule+0x36/0x80
[ 1721.304511]  [<ffffffffc0077cb8>] cv_wait_common+0x128/0x140 [spl]
[ 1721.304518]  [<ffffffff940ad390>] ? wake_atomic_t_function+0x60/0x60
[ 1721.304525]  [<ffffffffc0077ce5>] __cv_wait+0x15/0x20 [spl]
[ 1721.304591]  [<ffffffffc01de633>] txg_quiesce_thread+0x2e3/0x3f0 [zfs]
[ 1721.304640]  [<ffffffffc01de350>] ? txg_wait_open+0x100/0x100 [zfs]
[ 1721.304647]  [<ffffffffc00725d0>] ? __thread_exit+0x20/0x20 [spl]
[ 1721.304654]  [<ffffffffc0072642>] thread_generic_wrapper+0x72/0x80 [spl]
[ 1721.304658]  [<ffffffff9408e457>] kthread+0xd7/0xf0
[ 1721.304662]  [<ffffffff9408e380>] ? kthread_park+0x60/0x60
[ 1721.304664]  [<ffffffff9408e380>] ? kthread_park+0x60/0x60
[ 1721.304669]  [<ffffffff9456a155>] ret_from_fork+0x25/0x30
[ 1721.304686] INFO: task zfs:15048 blocked for more than 120 seconds.
[ 1721.304753]       Tainted: P           O    4.9.58 #1-NixOS
[ 1721.304810] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1721.304890] zfs             D    0 15048  15040 0x00000000
[ 1721.304894]  ffff92d7617f0400 0000000000000000 ffff92d7afb57ec0 ffff92d78bf7cf80
[ 1721.304900]  ffff92d78c354240 ffffaf4e032438c8 ffffffff94565082 ffffffffc016ec26
[ 1721.304904]  ffff92d68c3e9730 0000000000000001 ffffaf4e032438d0 ffff92d78bf7cf80
[ 1721.304909] Call Trace:
[ 1721.304916]  [<ffffffff94565082>] ? __schedule+0x192/0x660
[ 1721.304953]  [<ffffffffc016ec26>] ? dbuf_rele+0x36/0x40 [zfs]
[ 1721.304959]  [<ffffffff94565586>] schedule+0x36/0x80
[ 1721.304967]  [<ffffffffc0077cb8>] cv_wait_common+0x128/0x140 [spl]
[ 1721.304972]  [<ffffffff940ad390>] ? wake_atomic_t_function+0x60/0x60
[ 1721.304979]  [<ffffffffc0077ce5>] __cv_wait+0x15/0x20 [spl]
[ 1721.305015]  [<ffffffffc0173f92>] bqueue_enqueue+0x62/0xe0 [zfs]
[ 1721.305059]  [<ffffffffc01898c1>] dmu_recv_stream+0x691/0x11c0 [zfs]
[ 1721.305066]  [<ffffffffc009062a>] ? nv_mem_zalloc.isra.12+0x2a/0x40 [znvpair]
[ 1721.305116]  [<ffffffffc02108fa>] ? zfs_set_prop_nvlist+0x2fa/0x510 [zfs]
[ 1721.305190]  [<ffffffffc0211057>] zfs_ioc_recv_impl+0x407/0x1170 [zfs]
[ 1721.305241]  [<ffffffffc02123f9>] zfs_ioc_recv_new+0x369/0x400 [zfs]
[ 1721.305254]  [<ffffffffc00702cc>] ? spl_kmem_alloc_impl+0x9c/0x180 [spl]
[ 1721.305263]  [<ffffffffc00724a9>] ? spl_vmem_alloc+0x19/0x20 [spl]
[ 1721.305270]  [<ffffffffc00958af>] ? nv_alloc_sleep_spl+0x1f/0x30 [znvpair]
[ 1721.305276]  [<ffffffffc009062a>] ? nv_mem_zalloc.isra.12+0x2a/0x40 [znvpair]
[ 1721.305283]  [<ffffffffc00906ff>] ? nvlist_xalloc.part.13+0x5f/0xc0 [znvpair]
[ 1721.305330]  [<ffffffffc020f0eb>] zfsdev_ioctl+0x20b/0x660 [zfs]
[ 1721.305340]  [<ffffffff941ff604>] do_vfs_ioctl+0x94/0x5c0
[ 1721.305347]  [<ffffffff9405dece>] ? __do_page_fault+0x25e/0x4c0
[ 1721.305352]  [<ffffffff941ffba9>] SyS_ioctl+0x79/0x90
[ 1721.305359]  [<ffffffff94569ef7>] entry_SYSCALL_64_fastpath+0x1a/0xa9
[ 1721.305366] INFO: task receive_writer:18808 blocked for more than 120 seconds.
[ 1721.305442]       Tainted: P           O    4.9.58 #1-NixOS
[ 1721.305500] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1721.305637] receive_writer  D    0 18808      2 0x00000000
[ 1721.305644]  ffff92d789a44400 0000000000000000 ffff92d7afb57ec0 ffff92d78bf7ea00
[ 1721.305654]  ffff92d78bf78000 ffffaf4e11137b30 ffffffff94565082 0000000000000000
[ 1721.305662]  ffffffffc02af5d0 00ffffffc02c51d0 0000000000000001 ffff92d78bf7ea00
[ 1721.305671] Call Trace:
[ 1721.305681]  [<ffffffff94565082>] ? __schedule+0x192/0x660
[ 1721.305691]  [<ffffffff94565586>] schedule+0x36/0x80
[ 1721.305703]  [<ffffffffc0075aeb>] spl_panic+0xeb/0x100 [spl]
[ 1721.305765]  [<ffffffffc016ec26>] ? dbuf_rele+0x36/0x40 [zfs]
[ 1721.305821]  [<ffffffffc0190107>] ? dnode_hold_impl+0xb57/0xc40 [zfs]
[ 1721.305873]  [<ffffffffc0190443>] ? dnode_setdirty+0x83/0x100 [zfs]
[ 1721.305879]  [<ffffffff945671e2>] ? mutex_lock+0x12/0x30
[ 1721.305943]  [<ffffffffc01bf84b>] ? multilist_sublist_unlock+0x2b/0x40 [zfs]
[ 1721.305997]  [<ffffffffc019020b>] ? dnode_hold+0x1b/0x20 [zfs]
[ 1721.306051]  [<ffffffffc017aa7a>] dmu_free_long_object_impl.part.11+0xba/0xf0 [zfs]
[ 1721.306102]  [<ffffffffc017ab24>] dmu_free_long_object_raw+0x34/0x40 [zfs]
[ 1721.306147]  [<ffffffffc0187858>] receive_freeobjects.isra.11+0x58/0x110 [zfs]
[ 1721.306207]  [<ffffffffc0187cb5>] receive_writer_thread+0x3a5/0xd50 [zfs]
[ 1721.306214]  [<ffffffff941ce021>] ? __slab_free+0xa1/0x2e0
[ 1721.306221]  [<ffffffff940a5200>] ? set_next_entity+0x70/0x890
[ 1721.306231]  [<ffffffffc006ff53>] ? spl_kmem_free+0x33/0x40 [spl]
[ 1721.306244]  [<ffffffffc00725d0>] ? __thread_exit+0x20/0x20 [spl]
[ 1721.306288]  [<ffffffffc0187910>] ? receive_freeobjects.isra.11+0x110/0x110 [zfs]
[ 1721.306296]  [<ffffffffc00725d0>] ? __thread_exit+0x20/0x20 [spl]
[ 1721.306303]  [<ffffffffc0072642>] thread_generic_wrapper+0x72/0x80 [spl]
[ 1721.306307]  [<ffffffff9408e457>] kthread+0xd7/0xf0
[ 1721.306316]  [<ffffffff9408e380>] ? kthread_park+0x60/0x60
[ 1721.306323]  [<ffffffff9456a155>] ret_from_fork+0x25/0x30
[ ... the same hung-task reports for txg_quiesce, zfs, and receive_writer repeat every ~120 seconds ... ]

As said, if I don't use the -R option for the first full send to the server, it works fine, but when I then try to send an incremental snapshot the same thing happens.

I also tried to send the snapshots to an encrypted child dataset on the server with the same results.

tcaputi commented 7 years ago

Well, that's interesting... you don't have an object 39261 in your send stream. I'll see what I can figure out.

sjau commented 7 years ago

ah, it's 393261... let's retry...

tcaputi commented 7 years ago

Let me give you a slightly modified command. Sorry:

zfs send tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-19h00 | zstreamdump -v | grep -wE 'OBJECT|FREE' | grep 'object = 393261'
sjau commented 7 years ago

no problem.

sjau commented 7 years ago

There we go:

root@subi:~# zfs send tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-19h00 | zstreamdump -v | grep -wE 'OBJECT|FREE' | grep 'object = 393261'
OBJECT object = 393261 type = 19 bonustype = 44 blksz = 27648 bonuslen = 168 dn_slots = 1 raw_bonuslen = 0 flags = 0 indblkshift = 0 nlevels = 0 nblkptr = 0
FREE object = 393261 offset = 27648 length = -1
tcaputi commented 7 years ago

And recordsize for these blocks is the default 128k?

tcaputi commented 7 years ago

Oh, I'm sorry. I forgot to put your -wR flags in the command. My fault:

zfs send -wR tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-19h00 | zstreamdump -v | grep -wE 'OBJECT|FREE' | grep 'object = 393261'
sjau commented 7 years ago

How do I check that? The recordsize, I mean.

tcaputi commented 7 years ago

How to check that?

zfs get recordsize <dataset name>

If you don't know, it's probably the default.

sjau commented 7 years ago

It is:

zfs get recordsize tank/encZFS/Nixos
NAME               PROPERTY    VALUE    SOURCE
tank/encZFS/Nixos  recordsize  128K     default
tcaputi commented 7 years ago

Would you mind running the revised command above (https://github.com/zfsonlinux/zfs/issues/6821#issuecomment-344373926)? Sorry about that.

sjau commented 7 years ago

already doing so :)

sjau commented 7 years ago
root@subi:~# zfs send -wR tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-19h00 | zstreamdump -v | grep -wE 'OBJECT|FREE' | grep 'object = 393261'
OBJECT object = 393261 type = 19 bonustype = 44 blksz = 131072 bonuslen = 168 dn_slots = 1 raw_bonuslen = 320 flags = 0 indblkshift = 17 nlevels = 2 nblkptr = 1
FREE object = 393261 offset = 10616832 length = -1
FREE object = 393261 offset = 1572864 length = 132644864
OBJECT object = 393261 type = 19 bonustype = 44 blksz = 27648 bonuslen = 168 dn_slots = 1 raw_bonuslen = 320 flags = 0 indblkshift = 17 nlevels = 1 nblkptr = 1
FREE object = 393261 offset = 27648 length = -1
OBJECT object = 393261 type = 19 bonustype = 44 blksz = 27648 bonuslen = 168 dn_slots = 1 raw_bonuslen = 320 flags = 0 indblkshift = 17 nlevels = 1 nblkptr = 1
FREE object = 393261 offset = 27648 length = -1
OBJECT object = 393261 type = 19 bonustype = 44 blksz = 27648 bonuslen = 168 dn_slots = 1 raw_bonuslen = 320 flags = 0 indblkshift = 17 nlevels = 1 nblkptr = 1
FREE object = 393261 offset = 27648 length = -1
OBJECT object = 393261 type = 19 bonustype = 44 blksz = 27648 bonuslen = 168 dn_slots = 1 raw_bonuslen = 320 flags = 0 indblkshift = 17 nlevels = 1 nblkptr = 1
FREE object = 393261 offset = 27648 length = -1
OBJECT object = 393261 type = 19 bonustype = 44 blksz = 27648 bonuslen = 168 dn_slots = 1 raw_bonuslen = 320 flags = 0 indblkshift = 17 nlevels = 1 nblkptr = 1
FREE object = 393261 offset = 27648 length = -1
OBJECT object = 393261 type = 19 bonustype = 44 blksz = 27648 bonuslen = 168 dn_slots = 1 raw_bonuslen = 320 flags = 0 indblkshift = 17 nlevels = 1 nblkptr = 1
FREE object = 393261 offset = 27648 length = -1
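(Aside: the grep'd zstreamdump lines above are easy to post-process if more records need inspecting. A small hypothetical Python helper, not part of any ZFS tooling, that turns each summary line into a dict:)

```python
import re

def parse_record(line):
    """Parse a `zstreamdump -v` OBJECT/FREE summary line into a dict.

    The first token is the record kind; the rest is a series of
    `name = integer` pairs (length = -1 means "to end of object").
    """
    kind, rest = line.split(None, 1)
    fields = re.findall(r"(\w+) = (-?\d+)", rest)
    return {"kind": kind, **{k: int(v) for k, v in fields}}

rec = parse_record("FREE object = 393261 offset = 27648 length = -1")
# rec["object"] is now 393261, rec["length"] is -1, etc.
```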
tcaputi commented 7 years ago

Thanks a lot for all the info. I'll see what I can do from here.

sjau commented 7 years ago

Well, thanks for your support and help :) Providing the info is about the only thing I can do.

tcaputi commented 7 years ago

I see what is going on now. The DRR_OBJECT record is attempting to reduce dn->dn_nlevels before the DRR_FREE record has actually freed that data, which is triggering the assert. It will take me a little while to come up with a fix, but I should have something soon.
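(For intuition, the ordering problem can be sketched as a toy Python model. This is NOT ZFS code; the object fields and the consistency check are simplified stand-ins for dn_nlevels and the VERIFY3 in dmu_free_long_object_impl().)

```python
# Toy model of the receive-side ordering bug: the raw stream carries an
# OBJECT record that shrinks the object's geometry (blksz/nlevels) and a
# FREE record that releases the old data. Applying the shrink while old
# data is still allocated hits an inconsistent state.

class ToyObject:
    def __init__(self, blksz, nlevels, allocated_len):
        self.blksz = blksz
        self.nlevels = nlevels
        self.allocated_len = allocated_len  # bytes still allocated

def apply_object(obj, blksz, nlevels):
    # Stand-in for the assert: reducing the level count while data past
    # the new geometry is still allocated is the state that panicked.
    if nlevels < obj.nlevels and obj.allocated_len > blksz:
        raise AssertionError("shrink before free: stale data present")
    obj.blksz, obj.nlevels = blksz, nlevels

def apply_free(obj, offset, length):
    # length = -1 in the dump means "free from offset to the end".
    obj.allocated_len = min(obj.allocated_len, offset)

def receive(obj, records):
    for kind, args in records:
        (apply_object if kind == "OBJECT" else apply_free)(obj, *args)

# Order as seen in the dump: OBJECT(blksz=27648, nlevels=1) arrives
# before FREE(offset=27648, length=-1), so the check fires.
obj = ToyObject(blksz=131072, nlevels=2, allocated_len=10616832)
try:
    receive(obj, [("OBJECT", (27648, 1)), ("FREE", (27648, -1))])
    outcome = "ok"
except AssertionError:
    outcome = "panic"

# Processing the FREE first leaves the object consistent.
fixed = ToyObject(blksz=131072, nlevels=2, allocated_len=10616832)
receive(fixed, [("FREE", (27648, -1)), ("OBJECT", (27648, 1))])
```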

sjau commented 7 years ago

No worries :) There's plenty on your plate with that whole other rework thing...

tcaputi commented 7 years ago

One more bit of info just to make sure I'm right. Can you paste the zfs send and zfs receive commands you are using to trigger the panic?

sjau commented 7 years ago

Sure

zfs send -wR tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-19h00 | ssh root@10.200.0.3 "zfs receive serviTank/encZFS/BU/subi/Nixos"

sjau commented 7 years ago

Without the -R flag the raw send works... with the -R flag the issue is triggered.

Testing it again without the -R flag, just to be sure that what I said is still correct.

tcaputi commented 7 years ago

Does your pool include any clones of encrypted datasets? That would explain why it only crashes with -R or on incrementals

sjau commented 7 years ago

I'm not sure what you mean by clones of encrypted datasets.

tcaputi commented 7 years ago

> I'm not sure what you mean by clones of encrypted datasets.

Can you run zfs get origin,encryption and paste the output?

sjau commented 7 years ago

Output on receiving end:

NAME                                                                 PROPERTY    VALUE        SOURCE
serviTank                                                            origin      -            -
serviTank                                                            encryption  off          default
serviTank/encZFS                                                     origin      -            -
serviTank/encZFS                                                     encryption  aes-256-gcm  -
serviTank/encZFS/BU                                                  origin      -            -
serviTank/encZFS/BU                                                  encryption  aes-256-gcm  -
serviTank/encZFS/BU/jus-law                                          origin      -            -
serviTank/encZFS/BU/jus-law                                          encryption  aes-256-gcm  -
serviTank/encZFS/BU/jus-law/Nixos-2017-11-14_13-00                   origin      -            -
serviTank/encZFS/BU/jus-law/Nixos-2017-11-14_13-00                   encryption  aes-256-gcm  -
serviTank/encZFS/BU/jus-law/Nixos-2017-11-14_13-00@2017-11-14_13-00  origin      -            -
serviTank/encZFS/BU/jus-law/Nixos-2017-11-14_13-00@2017-11-14_13-00  encryption  aes-256-gcm  -
serviTank/encZFS/BU/jus-law/Nixos-2017-11-14_21-00                   origin      -            -
serviTank/encZFS/BU/jus-law/Nixos-2017-11-14_21-00                   encryption  aes-256-gcm  -
serviTank/encZFS/BU/jus-law/Nixos-2017-11-14_21-00@2017-11-14_21-00  origin      -            -
serviTank/encZFS/BU/jus-law/Nixos-2017-11-14_21-00@2017-11-14_21-00  encryption  aes-256-gcm  -
serviTank/encZFS/BU/jus-law/VMs-2017-11-14_13-00                     origin      -            -
serviTank/encZFS/BU/jus-law/VMs-2017-11-14_13-00                     encryption  aes-256-gcm  -
serviTank/encZFS/BU/jus-law/VMs-2017-11-14_13-00@2017-11-14_13-00    origin      -            -
serviTank/encZFS/BU/jus-law/VMs-2017-11-14_13-00@2017-11-14_13-00    encryption  aes-256-gcm  -
serviTank/encZFS/BU/jus-law/VMs-2017-11-14_21-00                     origin      -            -
serviTank/encZFS/BU/jus-law/VMs-2017-11-14_21-00                     encryption  aes-256-gcm  -
serviTank/encZFS/BU/subi                                             origin      -            -
serviTank/encZFS/BU/subi                                             encryption  aes-256-gcm  -
serviTank/encZFS/BU/subi/Nixos                                       origin      -            -
serviTank/encZFS/BU/subi/Nixos                                       encryption  aes-256-gcm  -
serviTank/encZFS/Nixos                                               origin      -            -
serviTank/encZFS/Nixos                                               encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_weekly-2017-10-30-00h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_weekly-2017-10-30-00h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_monthly-2017-11-01-00h00        origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_monthly-2017-11-01-00h00        encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_weekly-2017-11-06-00h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_weekly-2017-11-06-00h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-08-00h00          origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-08-00h00          encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-09-00h00          origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-09-00h00          encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-10-00h00          origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-10-00h00          encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-11-00h00          origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-11-00h00          encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-12-00h00          origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-12-00h00          encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-13-00h00          origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-13-00h00          encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_weekly-2017-11-13-00h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_weekly-2017-11-13-00h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-13-22h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-13-22h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-13-23h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-13-23h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-14-00h00          origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-14-00h00          encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-00h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-00h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-01h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-01h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-02h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-02h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-03h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-03h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-04h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-04h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-05h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-05h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-06h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-06h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-07h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-07h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-08h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-08h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-09h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-09h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-10h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-10h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-11h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-11h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-12h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-12h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-13h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-13h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-14h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-14h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-15h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-15h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-16h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-16h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-17h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-17h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-18h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-18h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-19h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-19h00         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-19h30       origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-19h30       encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h23       origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h23       encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-20h23         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-20h23         encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h30       origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h30       encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h45       origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h45       encryption  aes-256-gcm  -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-21h00         origin      -            -
serviTank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-21h00         encryption  aes-256-gcm  -

Output on sending end:

NAME                                                       PROPERTY    VALUE        SOURCE
tank                                                       origin      -            -
tank                                                       encryption  off          default
tank/encZFS                                                origin      -            -
tank/encZFS                                                encryption  aes-256-gcm  -
tank/encZFS/Media                                          origin      -            -
tank/encZFS/Media                                          encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-13-22h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-13-22h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-13-23h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-13-23h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-00h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-00h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-01h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-01h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-02h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-02h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-03h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-03h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-04h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-04h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-05h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-05h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-06h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-06h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-07h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-07h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-08h46    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-08h46    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-09h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-09h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-10h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-10h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-11h15    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-11h15    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-12h05    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-12h05    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-13h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-13h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-14h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-14h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-15h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-15h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-16h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-16h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-17h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-17h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-18h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-18h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-19h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-19h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-20h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-20h00    encryption  aes-256-gcm  -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-21h00    origin      -            -
tank/encZFS/Media@zfs-auto-snap_hourly-2017-11-14-21h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos                                          origin      -            -
tank/encZFS/Nixos                                          encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_weekly-2017-10-23-00h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_weekly-2017-10-23-00h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_weekly-2017-10-30-00h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_weekly-2017-10-30-00h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_weekly-2017-11-06-00h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_weekly-2017-11-06-00h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-08-00h00     origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-08-00h00     encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-09-00h00     origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-09-00h00     encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-10-00h00     origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-10-00h00     encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-11-00h00     origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-11-00h00     encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-12-00h00     origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-12-00h00     encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_weekly-2017-11-13-00h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_weekly-2017-11-13-00h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-13-00h00     origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-13-00h00     encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-13-22h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-13-22h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-13-23h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-13-23h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-14-00h00     origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_daily-2017-11-14-00h00     encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-00h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-00h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-01h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-01h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-02h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-02h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-03h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-03h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-04h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-04h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-05h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-05h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-06h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-06h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-07h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-07h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-08h46    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-08h46    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-09h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-09h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-10h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-10h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-11h15    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-11h15    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-12h05    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-12h05    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-13h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-13h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-14h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-14h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-15h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-15h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-16h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-16h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-17h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-17h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-18h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-18h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-19h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-19h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-19h45  origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-19h45  encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-20h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-20h00    encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h15  origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h15  encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h30  origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h30  encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h45  origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_frequent-2017-11-14-20h45  encryption  aes-256-gcm  -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-21h00    origin      -            -
tank/encZFS/Nixos@zfs-auto-snap_hourly-2017-11-14-21h00    encryption  aes-256-gcm  -
tank/encZFS/VMs                                            origin      -            -
tank/encZFS/VMs                                            encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_daily-2017-11-11-00h00       origin      -            -
tank/encZFS/VMs@zfs-auto-snap_daily-2017-11-11-00h00       encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_daily-2017-11-12-00h00       origin      -            -
tank/encZFS/VMs@zfs-auto-snap_daily-2017-11-12-00h00       encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_daily-2017-11-13-00h00       origin      -            -
tank/encZFS/VMs@zfs-auto-snap_daily-2017-11-13-00h00       encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-13-22h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-13-22h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-13-23h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-13-23h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_daily-2017-11-14-00h00       origin      -            -
tank/encZFS/VMs@zfs-auto-snap_daily-2017-11-14-00h00       encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-00h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-00h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-01h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-01h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-02h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-02h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-03h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-03h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-04h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-04h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-05h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-05h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-06h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-06h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-07h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-07h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-08h46      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-08h46      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-09h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-09h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-10h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-10h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-11h15      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-11h15      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-12h05      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-12h05      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-13h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-13h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-14h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-14h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-15h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-15h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-16h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-16h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-17h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-17h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-18h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-18h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-19h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-19h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_frequent-2017-11-14-19h45    origin      -            -
tank/encZFS/VMs@zfs-auto-snap_frequent-2017-11-14-19h45    encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-20h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-20h00      encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_frequent-2017-11-14-20h15    origin      -            -
tank/encZFS/VMs@zfs-auto-snap_frequent-2017-11-14-20h15    encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_frequent-2017-11-14-20h30    origin      -            -
tank/encZFS/VMs@zfs-auto-snap_frequent-2017-11-14-20h30    encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_frequent-2017-11-14-20h45    origin      -            -
tank/encZFS/VMs@zfs-auto-snap_frequent-2017-11-14-20h45    encryption  aes-256-gcm  -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-21h00      origin      -            -
tank/encZFS/VMs@zfs-auto-snap_hourly-2017-11-14-21h00      encryption  aes-256-gcm  -
tcaputi commented 7 years ago

@sjau Can you try this patch and see if it helps? You will need it on the receiving side. https://pastebin.com/WyXSf4Mh

sjau commented 7 years ago

Should I also use that debug patch?

tcaputi commented 7 years ago

If it applies cleanly then go for it. Otherwise you can exclude it.
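As an aside, a generic way to check in advance whether a patch applies cleanly is `patch --dry-run`. This is a self-contained sketch using a throwaway file and patch in `/tmp/patchdemo` (hypothetical paths, not the actual ZFS source tree):

```shell
# Create a throwaway file and a matching unified diff to demonstrate --dry-run.
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'line one\nline two\n' > demo.txt
printf -- '--- demo.txt\n+++ demo.txt\n@@ -1,2 +1,2 @@\n line one\n-line two\n+line 2\n' > demo.patch

# --dry-run reports whether the patch would apply, without modifying anything.
patch --dry-run -p0 < demo.patch && echo "applies cleanly"

# Apply for real only once the dry run succeeds.
patch -p0 < demo.patch
```

If the dry run fails for one patch in a stack, that patch can simply be excluded, as suggested above.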

sjau commented 7 years ago

Yeah, both can be applied... so compiling now... and then testing again :)

sjau commented 7 years ago

finished compiling, rebooting server....

sjau commented 7 years ago

Again a panic:

[  123.889025] wireguard: WireGuard 0.0.20171101 loaded. See www.wireguard.com for information.
[  123.889025] wireguard: Copyright (C) 2015-2017 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
[ 1556.531115] VERIFY3(0 == dmu_object_set_nlevels(rwa->os, drro->drr_object, drro->drr_nlevels, tx)) failed (0 == 22)
[ 1556.531142] PANIC at dmu_send.c:2542:receive_object()
[ 1556.531153] Showing stack for process 8546
[ 1556.531155] CPU: 5 PID: 8546 Comm: receive_writer Tainted: P           O    4.9.61 #1-NixOS
[ 1556.531155] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z97 Pro4, BIOS P2.50 05/27/2016
[ 1556.531156]  ffff9f3e90bc7b60 ffffffffad0f7ba2 ffffffffc05f6e64 00000000000009ee
[ 1556.531158]  ffff9f3e90bc7b70 ffffffffc03a65c2 ffff9f3e90bc7cf8 ffffffffc03a6695
[ 1556.531159]  0000000000000000 0000000000000030 ffff9f3e90bc7d08 ffff9f3e90bc7ca8
[ 1556.531160] Call Trace:
[ 1556.531165]  [<ffffffffad0f7ba2>] dump_stack+0x63/0x81
[ 1556.531169]  [<ffffffffc03a65c2>] spl_dumpstack+0x42/0x50 [spl]
[ 1556.531170]  [<ffffffffc03a6695>] spl_panic+0xc5/0x100 [spl]
[ 1556.531191]  [<ffffffffc0534700>] ? __dprintf+0x20/0x160 [zfs]
[ 1556.531192]  [<ffffffffc03a0f53>] ? spl_kmem_free+0x33/0x40 [spl]
[ 1556.531202]  [<ffffffffc04c15f3>] ? dnode_rele_and_unlock+0x53/0x80 [zfs]
[ 1556.531212]  [<ffffffffc04c1659>] ? dnode_rele+0x39/0x40 [zfs]
[ 1556.531221]  [<ffffffffc04b5d97>] receive_object+0x527/0x760 [zfs]
[ 1556.531233]  [<ffffffffc04c1659>] ? dnode_rele+0x39/0x40 [zfs]
[ 1556.531243]  [<ffffffffc04b8f76>] receive_writer_thread+0x3c6/0xd50 [zfs]
[ 1556.531246]  [<ffffffffacfce061>] ? __slab_free+0xa1/0x2e0
[ 1556.531248]  [<ffffffffacea5240>] ? set_next_entity+0x70/0x890
[ 1556.531249]  [<ffffffffc03a0f53>] ? spl_kmem_free+0x33/0x40 [spl]
[ 1556.531250]  [<ffffffffc03a35d0>] ? __thread_exit+0x20/0x20 [spl]
[ 1556.531260]  [<ffffffffc04b8bb0>] ? receive_freeobjects.isra.11+0x110/0x110 [zfs]
[ 1556.531262]  [<ffffffffc03a35d0>] ? __thread_exit+0x20/0x20 [spl]
[ 1556.531264]  [<ffffffffc03a3642>] thread_generic_wrapper+0x72/0x80 [spl]
[ 1556.531266]  [<fffffffface8e497>] kthread+0xd7/0xf0
[ 1556.531266]  [<fffffffface8e3c0>] ? kthread_park+0x60/0x60
[ 1556.531268]  [<ffffffffad36a155>] ret_from_fork+0x25/0x30
root@servi-nixos:~# cat /proc/spl/kstat/zfs/dbgmsg | grep 'HERE'
1510867036   dnode.c:1720:dnode_set_nlevels(): HERE: obj = 393261 nlevels = 1, dn_nlevels = 2
1510867036   dnode.c:1720:dnode_set_nlevels(): HERE: obj = 393261 nlevels = 1, dn_nlevels = 2

Using

  zfsUnstable = common {
    # comment/uncomment if breaking kernel versions are known
    incompatibleKernelVersion = null;

    # this package should point to a version / git revision compatible with the latest kernel release
    version = "2017-11-12";

    rev = "5277f208f290ea4e2204800a66c3ba20a03fe503";
    sha256 = "0hhlhv4g678j1w45813xfrk8zza0af59cdkmib9bkxy0cn0jsnd6";
    isUnstable = true;

    extraPatches = [
      (fetchpatch {
        url = "https://www.sjau.ch/zfs_mic.patch";
        sha256 = "1vq60s26pwi2vsk2amr9jhzcqbs4kzq0xfp8xw9gzp2h2gynpfj7";
      })
      (fetchpatch {
        url = "https://pastebin.com/raw/WyXSf4Mh";
        sha256 = "0afn0ny9ali2n64w8a7883a6cq0fbn9zybzpsc87zf0sx68rwn9s";
      })
      (fetchpatch {
        url = "https://github.com/Mic92/zfs/compare/ded8f06a3cfee...nixos-zfs-2017-09-12.patch";
        sha256 = "033wf4jn0h0kp0h47ai98rywnkv5jwvf3xwym30phnaf8xxdx8aj";
      })
    ];

    spl = splUnstable;
  };
tcaputi commented 7 years ago

Hmmmm, alright. I'll keep looking into it. Thanks.

sjau commented 7 years ago

Thx for your work

sjau commented 7 years ago

I'm just wondering - am I the only one having this issue? I mean there's more people than me out there using encrypted datasets and I assume they also do backups by zfs send / receive....

tcaputi commented 7 years ago

@sjau As far as I'm aware you are the only one experiencing this problem. ZFS encryption is a very new feature and not completely stable yet (as you can see). It isn't even available in non-rolling-release distributions, and even then you need to specifically ask for the zfs-git package for the bleeding-edge code.

That said, thank you for helping us to get this stable and for your help with debugging.

sjau commented 7 years ago

Well, it's been running perfectly fine for me - except for receiving :)

tcaputi commented 7 years ago

Raw sends are definitely the hardest part of the encryption feature to get right (and one of the most valuable IMO). I'm working on a script to reproduce your situation now which will make it a lot easier to debug. I'll see what I can come up with.

tcaputi commented 7 years ago

I think I may have found why this wasn't working. Can you try this (instead of the last patch I gave you)?: https://pastebin.com/D0AHpNvX

sjau commented 7 years ago

Will test it... should be ready in about 30 min again - you know, compile, reboot, test, reply :)

sjau commented 7 years ago

Still no luck - btw, do I need to use the same version on the sending side?

echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
echo 1073741824 > /sys/module/zfs/parameters/zfs_dbgmsg_maxsize
echo 4294967263 > /sys/module/zfs/parameters/zfs_flags
[ 1220.692291] CPU: 0 PID: 374 Comm: dp_sync_taskq Tainted: P      D    O    4.9.61 #1-NixOS
[ 1220.692291] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z97 Pro4, BIOS P2.50 05/27/2016
[ 1220.692292] task: ffff9dd3c3b84f80 task.stack: ffffa430504fc000
[ 1220.692300] RIP: 0010:[<ffffffffc03def33>]  [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.692301] RSP: 0018:ffffa430504ffa80  EFLAGS: 00010206
[ 1220.692301] RAX: ffff9dd2f3934c30 RBX: ffff9dd2d274f800 RCX: 0000000000000004
[ 1220.692302] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9dd3a3112878
[ 1220.692302] RBP: ffffa430504ffb78 R08: 0000000000000000 R09: 0000000180200017
[ 1220.692302] R10: ffff9dd2d274f800 R11: 0000000000000000 R12: ffff9dd22890e618
[ 1220.692302] R13: ffff9dcf5bd10688 R14: ffff9dd3ad453c08 R15: ffff9dd39a342000
[ 1220.692303] FS:  0000000000000000(0000) GS:ffff9dd3efa00000(0000) knlGS:0000000000000000
[ 1220.692303] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1220.692304] CR2: 0000000000000018 CR3: 00000007e0214000 CR4: 00000000001406f0
[ 1220.692304] Stack:
[ 1220.692305]  ffff9dd22890f248 ffff9dd2f3934c30 0000000000000000 ffff9dd2f3934c30
[ 1220.692306]  ffffa430504ffac0 ffffffffc0401659 ffff9dd22890f248 ffff9dd22890f2c8
[ 1220.692306]  ffffa430504ffb78 ffffffffc03dfcad 0e8f73e6951da5e4 0000000000069a70
[ 1220.692307] Call Trace:
[ 1220.692316]  [<ffffffffc0401659>] ? dnode_rele+0x39/0x40 [zfs]
[ 1220.692324]  [<ffffffffc03dfcad>] ? dbuf_rele_and_unlock+0x46d/0x4b0 [zfs]
[ 1220.692334]  [<ffffffffc04a2325>] ? zio_buf_alloc+0x55/0x60 [zfs]
[ 1220.692335]  [<ffffffff9ed671e2>] ? mutex_lock+0x12/0x30
[ 1220.692337]  [<ffffffffc02e0f53>] ? spl_kmem_free+0x33/0x40 [spl]
[ 1220.692344]  [<ffffffffc03e2f1a>] dbuf_sync_leaf+0x13a/0x610 [zfs]
[ 1220.692351]  [<ffffffffc03d8272>] ? arc_alloc_buf+0xd2/0x100 [zfs]
[ 1220.692353]  [<ffffffff9e9cfe9a>] ? __kmalloc_node+0x1da/0x2a0
[ 1220.692360]  [<ffffffffc03dfabe>] ? dbuf_rele_and_unlock+0x27e/0x4b0 [zfs]
[ 1220.692361]  [<ffffffffc02e0ffb>] ? spl_kmem_alloc+0x9b/0x170 [spl]
[ 1220.692363]  [<ffffffffc02e0ffb>] ? spl_kmem_alloc+0x9b/0x170 [spl]
[ 1220.692370]  [<ffffffffc03e34ea>] dbuf_sync_list+0xfa/0x100 [zfs]
[ 1220.692380]  [<ffffffffc0403c3b>] dnode_sync+0x3bb/0x880 [zfs]
[ 1220.692389]  [<ffffffffc03ee5cb>] sync_dnodes_task+0x7b/0xb0 [zfs]
[ 1220.692390]  [<ffffffffc02e47f5>] taskq_thread+0x2b5/0x4e0 [spl]
[ 1220.692391]  [<ffffffff9e898d40>] ? wake_up_q+0x80/0x80
[ 1220.692393]  [<ffffffffc02e4540>] ? task_done+0xb0/0xb0 [spl]
[ 1220.692393]  [<ffffffff9e88e497>] kthread+0xd7/0xf0
[ 1220.692394]  [<ffffffff9e88e3c0>] ? kthread_park+0x60/0x60
[ 1220.692395]  [<ffffffff9ed6a155>] ret_from_fork+0x25/0x30
[ 1220.692404] Code: e8 93 a0 ff ff 41 0f b6 54 24 50 41 8b 8c 24 90 00 00 00 48 8b 85 58 ff ff ff 4c 3b 70 48 0f 84 41 02 00 00 49 8b b6 f0 00 00 00 <4c> 8b 76 18 49 8b 37 48 85 f6 0f 84 3d 02 00 00 48 8b b6 68 01 
[ 1220.692411] RIP  [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.692411]  RSP <ffffa430504ffa80>
[ 1220.692412] CR2: 0000000000000018
[ 1220.692412] ---[ end trace 3f2c81671047d008 ]---
root@servi-nixos:~# cat /proc/spl/kstat/zfs/dbgmsg | grep 'HERE'
root@servi-nixos:~# 
  zfsUnstable = common {
    # comment/uncomment if breaking kernel versions are known
    incompatibleKernelVersion = null;

    # this package should point to a version / git revision compatible with the latest kernel release
    version = "2017-11-12";

    rev = "5277f208f290ea4e2204800a66c3ba20a03fe503";
    sha256 = "0hhlhv4g678j1w45813xfrk8zza0af59cdkmib9bkxy0cn0jsnd6";
    isUnstable = true;

    extraPatches = [
      (fetchpatch {
        url = "https://www.sjau.ch/zfs_mic.patch";
        sha256 = "1vq60s26pwi2vsk2amr9jhzcqbs4kzq0xfp8xw9gzp2h2gynpfj7";
      })
#      (fetchpatch {
#        url = "https://www.sjau.ch/zfs_new.patch";
#        sha256 = "0mrn5937sj7lglirdyfm0ikk7r1453i1r1ha7s74bzx8hpi8hb68";
#      })
#      (fetchpatch {
#        url = "https://www.sjau.ch/zfs_debug.patch";
#        sha256 = "00sns3c7wzgf8jxasvgkrax5k3m548wkxq6p7zvp149mrmy75dmm";
#      })
#      (fetchpatch {
#        url = "https://pastebin.com/raw/WyXSf4Mh";
#        sha256 = "0afn0ny9ali2n64w8a7883a6cq0fbn9zybzpsc87zf0sx68rwn9s";
#      })
      (fetchpatch {
        url = "https://pastebin.com/raw/D0AHpNvX";
        sha256 = "0b4lsjjmakvg8jrvvv36h4kyxd5hk979jcman2x8q3y6nnxxpnny";
      })
      (fetchpatch {
        url = "https://github.com/Mic92/zfs/compare/ded8f06a3cfee...nixos-zfs-2017-09-12.patch";
        sha256 = "033wf4jn0h0kp0h47ai98rywnkv5jwvf3xwym30phnaf8xxdx8aj";
      })
    ];

    spl = splUnstable;
  };
tcaputi commented 7 years ago

do I need to use the same version on the sending side?

You shouldn't, no. This stack trace looks different from before. Can you post the whole thing (including the ASSERT, VERIFY, or BUG message that should be just above the trace you posted)?

sjau commented 7 years ago

On the sending side I still use the same revision but without your debug and "pastebin" patches applied.

PMT_ in #zfs on freenode thinks I should use the same patches on the sending side too... should I try?

And here it is

[   63.382374] systemd-journald[1740]: Failed to set ACL on /var/log/journal/cc00e28c9c9241ad97b93019da77628d/user-1000.journal, ignoring: Operation not supported
[   72.997967] wireguard: WireGuard 0.0.20171101 loaded. See www.wireguard.com for information.
[   72.997967] wireguard: Copyright (C) 2015-2017 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
[ 1220.686266] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
[ 1220.686308] IP: [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.686362] PGD 0 

[ 1220.686380] Oops: 0000 [#1] SMP
[ 1220.686394] Modules linked in: wireguard(O) ip6_udp_tunnel udp_tunnel msr iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ip6t_rpfilter ipt_rpfilter ip6table_raw iptable_raw xt_pkttype nf_log_ipv6 nf_log_ipv4 nf_log_common xt_LOG xt_tcpudp x86_pkg_temp_thermal intel_powerclamp coretemp ext4 crc16 jbd2 crct10dif_pclmul fscrypto ip6table_filter mbcache ip6_tables crc32_pclmul iptable_filter ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper snd_hda_codec_hdmi cryptd mac_hid snd_soc_ssm4567 i915 snd_soc_rt5640 iTCO_wdt snd_soc_rl6231 mxm_wmi evdev elan_i2c snd_soc_core intel_cstate snd_compress ac97_bus drm_kms_helper snd_pcm_dmaengine i2c_hid hid snd_hda_codec_realtek sdhci_acpi drm regmap_i2c intel_gtt
[ 1220.686725]  intel_uncore intel_rapl_perf agpgart i2c_algo_bit fb_sys_fops psmouse snd_hda_codec_generic sdhci syscopyarea serio_raw sysfillrect mmc_core i2c_designware_platform sysimgblt i2c_designware_core i2c_core led_class snd_hda_intel snd_hda_codec snd_hda_core mei_me snd_soc_sst_acpi snd_hwdep nuvoton_cir snd_soc_sst_match mei rc_core fjes battery gpio_lynxpoint tpm_tis dw_dmac tpm_tis_core tpm 8250_dw video intel_smartconnect lpc_ich wmi shpchp spi_pxa2xx_platform button acpi_pad snd_pcm_oss snd_mixer_oss snd_pcm snd_timer snd soundcore loop cpufreq_powersave kvm_intel kvm irqbypass ip_tables x_tables ipv6 crc_ccitt autofs4 sd_mod xhci_pci xhci_hcd ahci libahci libata ehci_pci ehci_hcd usbcore atkbd scsi_mod libps2 crc32c_intel usb_common i8042 rtc_cmos serio e1000e ptp pps_core af_packet dm_mod zfs(PO) zunicode(PO) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O)
[ 1220.687099] CPU: 6 PID: 373 Comm: dp_sync_taskq Tainted: P           O    4.9.61 #1-NixOS
[ 1220.687128] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z97 Pro4, BIOS P2.50 05/27/2016
[ 1220.687160] task: ffff9dd3c3b84240 task.stack: ffffa430504f4000
[ 1220.687182] RIP: 0010:[<ffffffffc03def33>]  [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.687237] RSP: 0018:ffffa430504f7a80  EFLAGS: 00010206
[ 1220.687257] RAX: ffff9dced89e1dd0 RBX: ffff9dd0f86ce200 RCX: 0000000000000004
[ 1220.687282] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9dd0a704e320
[ 1220.687308] RBP: ffffa430504f7b78 R08: 0000000000000000 R09: 000000018020001a
[ 1220.687330] R10: ffff9dd0f86ce200 R11: ffff9dd3c384dd00 R12: ffff9dd3b4c88888
[ 1220.687351] R13: ffff9dd055ff0aa8 R14: ffff9dd3ca53e270 R15: ffff9dd39a342000
[ 1220.687372] FS:  0000000000000000(0000) GS:ffff9dd3efb80000(0000) knlGS:0000000000000000
[ 1220.687401] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1220.687416] CR2: 0000000000000018 CR3: 000000042fe07000 CR4: 00000000001406e0
[ 1220.687434] Stack:
[ 1220.687443]  ffff9dd1e9871728 ffff9dced89e1dd0 0000000000000000 ffff9dced89e1dd0
[ 1220.687467]  ffffa430504f7ac0 ffffffffc0401659 ffff9dd1e9871728 ffff9dd1e98717a8
[ 1220.687491]  ffffa430504f7b78 ffffffffc03dfcad ffffa430504f7b30 0000000000069a70
[ 1220.687513] Call Trace:
[ 1220.687542]  [<ffffffffc0401659>] ? dnode_rele+0x39/0x40 [zfs]
[ 1220.687570]  [<ffffffffc03dfcad>] ? dbuf_rele_and_unlock+0x46d/0x4b0 [zfs]
[ 1220.687589]  [<ffffffffc02e0f53>] ? spl_kmem_free+0x33/0x40 [spl]
[ 1220.687619]  [<ffffffffc03e2f1a>] dbuf_sync_leaf+0x13a/0x610 [zfs]
[ 1220.687636]  [<ffffffff9ed6508a>] ? __schedule+0x19a/0x660
[ 1220.687662]  [<ffffffffc03dfabe>] ? dbuf_rele_and_unlock+0x27e/0x4b0 [zfs]
[ 1220.687681]  [<ffffffff9ed6577b>] ? _cond_resched+0x2b/0x40
[ 1220.687708]  [<ffffffffc03e34ea>] dbuf_sync_list+0xfa/0x100 [zfs]
[ 1220.687735]  [<ffffffffc0403c3b>] dnode_sync+0x3bb/0x880 [zfs]
[ 1220.687761]  [<ffffffffc03ee5cb>] sync_dnodes_task+0x7b/0xb0 [zfs]
[ 1220.687779]  [<ffffffffc02e47f5>] taskq_thread+0x2b5/0x4e0 [spl]
[ 1220.687796]  [<ffffffff9e898d40>] ? wake_up_q+0x80/0x80
[ 1220.687810]  [<ffffffffc02e4540>] ? task_done+0xb0/0xb0 [spl]
[ 1220.687823]  [<ffffffff9e88e497>] kthread+0xd7/0xf0
[ 1220.687840]  [<ffffffff9e88e3c0>] ? kthread_park+0x60/0x60
[ 1220.687855]  [<ffffffff9ed6a155>] ret_from_fork+0x25/0x30
[ 1220.687869] Code: e8 93 a0 ff ff 41 0f b6 54 24 50 41 8b 8c 24 90 00 00 00 48 8b 85 58 ff ff ff 4c 3b 70 48 0f 84 41 02 00 00 49 8b b6 f0 00 00 00 <4c> 8b 76 18 49 8b 37 48 85 f6 0f 84 3d 02 00 00 48 8b b6 68 01 
[ 1220.687984] RIP  [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.688013]  RSP <ffffa430504f7a80>
[ 1220.688023] CR2: 0000000000000018
[ 1220.691989] ---[ end trace 3f2c81671047d006 ]---
[ 1220.691992] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
[ 1220.692015] IP: [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.692017] PGD 0 

[ 1220.692018] Oops: 0000 [#2] SMP
[ 1220.692042] Modules linked in: wireguard(O) ip6_udp_tunnel udp_tunnel msr iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ip6t_rpfilter ipt_rpfilter ip6table_raw iptable_raw xt_pkttype nf_log_ipv6 nf_log_ipv4 nf_log_common xt_LOG xt_tcpudp x86_pkg_temp_thermal intel_powerclamp coretemp ext4 crc16 jbd2 crct10dif_pclmul fscrypto ip6table_filter mbcache ip6_tables crc32_pclmul iptable_filter ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper snd_hda_codec_hdmi cryptd mac_hid snd_soc_ssm4567 i915 snd_soc_rt5640 iTCO_wdt snd_soc_rl6231 mxm_wmi evdev elan_i2c snd_soc_core intel_cstate snd_compress ac97_bus drm_kms_helper snd_pcm_dmaengine i2c_hid hid snd_hda_codec_realtek sdhci_acpi drm regmap_i2c intel_gtt
[ 1220.692068]  intel_uncore intel_rapl_perf agpgart i2c_algo_bit fb_sys_fops psmouse snd_hda_codec_generic sdhci syscopyarea serio_raw sysfillrect mmc_core i2c_designware_platform sysimgblt i2c_designware_core i2c_core led_class snd_hda_intel snd_hda_codec snd_hda_core mei_me snd_soc_sst_acpi snd_hwdep nuvoton_cir snd_soc_sst_match mei rc_core fjes battery gpio_lynxpoint tpm_tis dw_dmac tpm_tis_core tpm 8250_dw video intel_smartconnect lpc_ich wmi shpchp spi_pxa2xx_platform button acpi_pad snd_pcm_oss snd_mixer_oss snd_pcm snd_timer snd soundcore loop cpufreq_powersave kvm_intel kvm irqbypass ip_tables x_tables ipv6 crc_ccitt autofs4 sd_mod xhci_pci xhci_hcd ahci libahci libata ehci_pci ehci_hcd usbcore atkbd scsi_mod libps2 crc32c_intel usb_common i8042 rtc_cmos serio e1000e ptp pps_core af_packet dm_mod zfs(PO) zunicode(PO) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O)
[ 1220.692074] CPU: 4 PID: 371 Comm: dp_sync_taskq Tainted: P      D    O    4.9.61 #1-NixOS
[ 1220.692074] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z97 Pro4, BIOS P2.50 05/27/2016
[ 1220.692075] task: ffff9dd3c3b827c0 task.stack: ffffa430504e4000
[ 1220.692092] RIP: 0010:[<ffffffffc03def33>]  [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.692093] RSP: 0018:ffffa430504e7a80  EFLAGS: 00010212
[ 1220.692094] RAX: ffff9dd2d0ec06a0 RBX: ffff9dd0181e3e00 RCX: 0000000000000004
[ 1220.692094] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9dd3a6f10450
[ 1220.692095] RBP: ffffa430504e7b78 R08: 0000000000000000 R09: 0000000180200013
[ 1220.692095] R10: ffff9dd0181e3e00 R11: ffff9dd3c384ea00 R12: ffff9dd3ab0e43a8
[ 1220.692096] R13: ffff9dcf5bd106e0 R14: ffff9dd279160750 R15: ffff9dd39a342000
[ 1220.692097] FS:  0000000000000000(0000) GS:ffff9dd3efb00000(0000) knlGS:0000000000000000
[ 1220.692097] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1220.692098] CR2: 0000000000000018 CR3: 00000007ef8dc000 CR4: 00000000001406e0
[ 1220.692099] Stack:
[ 1220.692100]  ffff9dd2245de4e0 ffff9dd2d0ec06a0 0000000000000000 ffff9dd2d0ec06a0
[ 1220.692102]  ffffa430504e7ac0 ffffffffc0401659 ffff9dd2245de4e0 ffff9dd2245de560
[ 1220.692103]  ffffa430504e7b78 ffffffffc03dfcad ffffa430504e7b30 0000000000069a70
[ 1220.692103] Call Trace:
[ 1220.692120]  [<ffffffffc0401659>] ? dnode_rele+0x39/0x40 [zfs]
[ 1220.692133]  [<ffffffffc03dfcad>] ? dbuf_rele_and_unlock+0x46d/0x4b0 [zfs]
[ 1220.692136]  [<ffffffffc02e0f53>] ? spl_kmem_free+0x33/0x40 [spl]
[ 1220.692147]  [<ffffffffc03e2f1a>] dbuf_sync_leaf+0x13a/0x610 [zfs]
[ 1220.692150]  [<ffffffff9ed6508a>] ? __schedule+0x19a/0x660
[ 1220.692161]  [<ffffffffc03dfabe>] ? dbuf_rele_and_unlock+0x27e/0x4b0 [zfs]
[ 1220.692163]  [<ffffffff9ed6577b>] ? _cond_resched+0x2b/0x40
[ 1220.692174]  [<ffffffffc03e34ea>] dbuf_sync_list+0xfa/0x100 [zfs]
[ 1220.692187]  [<ffffffffc0403c3b>] dnode_sync+0x3bb/0x880 [zfs]
[ 1220.692200]  [<ffffffffc03ee5cb>] sync_dnodes_task+0x7b/0xb0 [zfs]
[ 1220.692203]  [<ffffffffc02e47f5>] taskq_thread+0x2b5/0x4e0 [spl]
[ 1220.692204]  [<ffffffff9e898d40>] ? wake_up_q+0x80/0x80
[ 1220.692207]  [<ffffffffc02e4540>] ? task_done+0xb0/0xb0 [spl]
[ 1220.692208]  [<ffffffff9e88e497>] kthread+0xd7/0xf0
[ 1220.692209]  [<ffffffff9e88e3c0>] ? kthread_park+0x60/0x60
[ 1220.692210]  [<ffffffff9ed6a155>] ret_from_fork+0x25/0x30
[ 1220.692226] Code: e8 93 a0 ff ff 41 0f b6 54 24 50 41 8b 8c 24 90 00 00 00 48 8b 85 58 ff ff ff 4c 3b 70 48 0f 84 41 02 00 00 49 8b b6 f0 00 00 00 <4c> 8b 76 18 49 8b 37 48 85 f6 0f 84 3d 02 00 00 48 8b b6 68 01 
[ 1220.692237] RIP  [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.692237]  RSP <ffffa430504e7a80>
[ 1220.692238] CR2: 0000000000000018
[ 1220.692239] ---[ end trace 3f2c81671047d007 ]---
[ 1220.692240] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
[ 1220.692252] IP: [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.692253] PGD 7e11bc067 
[ 1220.692253] PUD 7e089f067 
[ 1220.692254] PMD 0 

[ 1220.692255] Oops: 0000 [#3] SMP
[ 1220.692272] Modules linked in: wireguard(O) ip6_udp_tunnel udp_tunnel msr iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ip6t_rpfilter ipt_rpfilter ip6table_raw iptable_raw xt_pkttype nf_log_ipv6 nf_log_ipv4 nf_log_common xt_LOG xt_tcpudp x86_pkg_temp_thermal intel_powerclamp coretemp ext4 crc16 jbd2 crct10dif_pclmul fscrypto ip6table_filter mbcache ip6_tables crc32_pclmul iptable_filter ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper snd_hda_codec_hdmi cryptd mac_hid snd_soc_ssm4567 i915 snd_soc_rt5640 iTCO_wdt snd_soc_rl6231 mxm_wmi evdev elan_i2c snd_soc_core intel_cstate snd_compress ac97_bus drm_kms_helper snd_pcm_dmaengine i2c_hid hid snd_hda_codec_realtek sdhci_acpi drm regmap_i2c intel_gtt
[ 1220.692288]  intel_uncore intel_rapl_perf agpgart i2c_algo_bit fb_sys_fops psmouse snd_hda_codec_generic sdhci syscopyarea serio_raw sysfillrect mmc_core i2c_designware_platform sysimgblt i2c_designware_core i2c_core led_class snd_hda_intel snd_hda_codec snd_hda_core mei_me snd_soc_sst_acpi snd_hwdep nuvoton_cir snd_soc_sst_match mei rc_core fjes battery gpio_lynxpoint tpm_tis dw_dmac tpm_tis_core tpm 8250_dw video intel_smartconnect lpc_ich wmi shpchp spi_pxa2xx_platform button acpi_pad snd_pcm_oss snd_mixer_oss snd_pcm snd_timer snd soundcore loop cpufreq_powersave kvm_intel kvm irqbypass ip_tables x_tables ipv6 crc_ccitt autofs4 sd_mod xhci_pci xhci_hcd ahci libahci libata ehci_pci ehci_hcd usbcore atkbd scsi_mod libps2 crc32c_intel usb_common i8042 rtc_cmos serio e1000e ptp pps_core af_packet dm_mod zfs(PO) zunicode(PO) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O)
[ 1220.692291] CPU: 0 PID: 374 Comm: dp_sync_taskq Tainted: P      D    O    4.9.61 #1-NixOS
[ 1220.692291] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z97 Pro4, BIOS P2.50 05/27/2016
[ 1220.692292] task: ffff9dd3c3b84f80 task.stack: ffffa430504fc000
[ 1220.692300] RIP: 0010:[<ffffffffc03def33>]  [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.692301] RSP: 0018:ffffa430504ffa80  EFLAGS: 00010206
[ 1220.692301] RAX: ffff9dd2f3934c30 RBX: ffff9dd2d274f800 RCX: 0000000000000004
[ 1220.692302] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9dd3a3112878
[ 1220.692302] RBP: ffffa430504ffb78 R08: 0000000000000000 R09: 0000000180200017
[ 1220.692302] R10: ffff9dd2d274f800 R11: 0000000000000000 R12: ffff9dd22890e618
[ 1220.692302] R13: ffff9dcf5bd10688 R14: ffff9dd3ad453c08 R15: ffff9dd39a342000
[ 1220.692303] FS:  0000000000000000(0000) GS:ffff9dd3efa00000(0000) knlGS:0000000000000000
[ 1220.692303] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1220.692304] CR2: 0000000000000018 CR3: 00000007e0214000 CR4: 00000000001406f0
[ 1220.692304] Stack:
[ 1220.692305]  ffff9dd22890f248 ffff9dd2f3934c30 0000000000000000 ffff9dd2f3934c30
[ 1220.692306]  ffffa430504ffac0 ffffffffc0401659 ffff9dd22890f248 ffff9dd22890f2c8
[ 1220.692306]  ffffa430504ffb78 ffffffffc03dfcad 0e8f73e6951da5e4 0000000000069a70
[ 1220.692307] Call Trace:
[ 1220.692316]  [<ffffffffc0401659>] ? dnode_rele+0x39/0x40 [zfs]
[ 1220.692324]  [<ffffffffc03dfcad>] ? dbuf_rele_and_unlock+0x46d/0x4b0 [zfs]
[ 1220.692334]  [<ffffffffc04a2325>] ? zio_buf_alloc+0x55/0x60 [zfs]
[ 1220.692335]  [<ffffffff9ed671e2>] ? mutex_lock+0x12/0x30
[ 1220.692337]  [<ffffffffc02e0f53>] ? spl_kmem_free+0x33/0x40 [spl]
[ 1220.692344]  [<ffffffffc03e2f1a>] dbuf_sync_leaf+0x13a/0x610 [zfs]
[ 1220.692351]  [<ffffffffc03d8272>] ? arc_alloc_buf+0xd2/0x100 [zfs]
[ 1220.692353]  [<ffffffff9e9cfe9a>] ? __kmalloc_node+0x1da/0x2a0
[ 1220.692360]  [<ffffffffc03dfabe>] ? dbuf_rele_and_unlock+0x27e/0x4b0 [zfs]
[ 1220.692361]  [<ffffffffc02e0ffb>] ? spl_kmem_alloc+0x9b/0x170 [spl]
[ 1220.692363]  [<ffffffffc02e0ffb>] ? spl_kmem_alloc+0x9b/0x170 [spl]
[ 1220.692370]  [<ffffffffc03e34ea>] dbuf_sync_list+0xfa/0x100 [zfs]
[ 1220.692380]  [<ffffffffc0403c3b>] dnode_sync+0x3bb/0x880 [zfs]
[ 1220.692389]  [<ffffffffc03ee5cb>] sync_dnodes_task+0x7b/0xb0 [zfs]
[ 1220.692390]  [<ffffffffc02e47f5>] taskq_thread+0x2b5/0x4e0 [spl]
[ 1220.692391]  [<ffffffff9e898d40>] ? wake_up_q+0x80/0x80
[ 1220.692393]  [<ffffffffc02e4540>] ? task_done+0xb0/0xb0 [spl]
[ 1220.692393]  [<ffffffff9e88e497>] kthread+0xd7/0xf0
[ 1220.692394]  [<ffffffff9e88e3c0>] ? kthread_park+0x60/0x60
[ 1220.692395]  [<ffffffff9ed6a155>] ret_from_fork+0x25/0x30
[ 1220.692404] Code: e8 93 a0 ff ff 41 0f b6 54 24 50 41 8b 8c 24 90 00 00 00 48 8b 85 58 ff ff ff 4c 3b 70 48 0f 84 41 02 00 00 49 8b b6 f0 00 00 00 <4c> 8b 76 18 49 8b 37 48 85 f6 0f 84 3d 02 00 00 48 8b b6 68 01 
[ 1220.692411] RIP  [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1220.692411]  RSP <ffffa430504ffa80>
[ 1220.692412] CR2: 0000000000000018
[ 1220.692412] ---[ end trace 3f2c81671047d008 ]---
[ 1765.949457] systemd[1]: systemd-journald.service: State 'stop-sigabrt' timed out. Terminating.
[ 1856.193523] systemd[1]: systemd-journald.service: State 'stop-sigterm' timed out. Killing.
[ 1856.194962] systemd[1]: systemd-journald.service: Killing process 1740 (systemd-journal) with signal SIGKILL.
[ 1946.437598] systemd[1]: systemd-journald.service: Processes still around after SIGKILL. Ignoring.
[ 2036.681686] systemd[1]: systemd-journald.service: State 'stop-final-sigterm' timed out. Killing.
[ 2036.683152] systemd[1]: systemd-journald.service: Killing process 1740 (systemd-journal) with signal SIGKILL.
[ 2126.925691] systemd[1]: systemd-journald.service: Processes still around after final SIGKILL. Entering failed mode.
[ 2126.927261] systemd[1]: systemd-journald.service: Unit entered failed state.
[ 2126.928750] systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
[ 2126.930330] systemd[1]: systemd-journald.service: Service has no hold-off time, scheduling restart.
[ 2126.932026] systemd[1]: Stopped Flush Journal to Persistent Storage.
[ 2126.933427] systemd[1]: Stopping Flush Journal to Persistent Storage...
[ 2126.934803] systemd[1]: Stopped Journal Service.
[ 2126.936468] systemd[1]: Starting Journal Service...
[ 2217.169415] systemd[1]: systemd-journald.service: Start operation timed out. Terminating.
[ 2307.413186] systemd[1]: systemd-journald.service: State 'stop-sigterm' timed out. Killing.
[ 2307.414426] systemd[1]: systemd-journald.service: Killing process 25834 (systemd-journal) with signal SIGKILL.
[ 2307.415649] systemd[1]: systemd-journald.service: Killing process 1740 (systemd-journal) with signal SIGKILL.
[ 2395.476526] systemd[1]: Starting ZFS auto-snapshotting every hour...
[ 2397.657008] systemd[1]: systemd-journald.service: Processes still around after SIGKILL. Ignoring.
[ 2427.371626] audit_printk_skb: 609 callbacks suppressed
[ 2427.371627] audit: type=1006 audit(1510948832.197:106): pid=26471 uid=0 old-auid=4294967295 auid=0 tty=(none) old-ses=4294967295 ses=6 res=1
[ 2427.398105] systemd[1]: Started Session 6 of user root.
[ 2487.900871] systemd[1]: systemd-journald.service: State 'stop-final-sigterm' timed out. Killing.
[ 2487.902077] systemd[1]: systemd-journald.service: Killing process 25834 (systemd-journal) with signal SIGKILL.
[ 2487.903293] systemd[1]: systemd-journald.service: Killing process 1740 (systemd-journal) with signal SIGKILL.
[ 2531.565515] audit: type=1006 audit(1510948936.399:107): pid=26724 uid=0 old-auid=4294967295 auid=0 tty=(none) old-ses=4294967295 ses=7 res=1
[ 2531.567907] systemd[1]: Started Session 7 of user root.
sjau commented 7 years ago

Is that what you're looking for?

tcaputi commented 7 years ago

Yes. That's interesting: this is a NULL pointer dereference. So we may have actually solved the original problem, but inadvertently added another bug to the code. We're looking into it now.

sjau commented 7 years ago

Want me to test sending and receiving with your patch?

tcaputi commented 7 years ago

No, that won't help. We're just trying to figure out where the NULL pointer dereference is coming from...

You could try this, but it might not work depending on whether NixOS is stripping your symbols.

First, apply this patch (replacing the last one I gave you): https://pastebin.com/qBfi3FaW. Then cause the crash again. This time, when you get the dump, look for the line listing the IP. In the stack trace above, this is IP: [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]. Hopefully, if all goes well, your function will be dbuf_write without the .isra.17 suffix. However, it may be a completely different function, too.

In either case, the part we're interested in looks like dbuf_write+0xd3: the function name plus the hex offset after the +, without the /0x510 size that follows the slash. The offset will almost certainly be different in your trace.
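The extraction above can be done mechanically. A minimal sketch, using a sample IP line copied from the trace earlier in this thread:

```shell
# Pull "function+offset" out of an oops IP line, dropping the /size part.
# The sample line is taken verbatim from the trace above.
line='IP: [<ffffffffc03def33>] dbuf_write.isra.17+0xd3/0x510 [zfs]'
printf '%s\n' "$line" | sed -E 's|.*\] ([A-Za-z0-9_.]+)\+(0x[0-9a-f]+)/0x[0-9a-f]+.*|\1+\2|'
# -> dbuf_write.isra.17+0xd3
```

The resulting symbol+offset string is exactly what goes inside gdb's list *(...) expression described below.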

Once you have that, find the zfs.ko file and run gdb zfs.ko. This will open a gdb prompt. Type:

list *(<function name>+<number>)

and hit enter. It should print out some lines of code that you can paste back here. You can quit gdb by typing q. Thanks again.

sjau commented 7 years ago

There we go:

root@servi-nixos:~# dmesg > /tmp/dmesg
root@servi-nixos:~# grep dbuf_write /tmp/dmesg
[ 1413.935639] IP: [<ffffffffc03bbf33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1413.936304] RIP: 0010:[<ffffffffc03bbf33>]  [<ffffffffc03bbf33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1413.937824] RIP  [<ffffffffc03bbf33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1413.943613] IP: [<ffffffffc03bbf33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1413.943662] RIP: 0010:[<ffffffffc03bbf33>]  [<ffffffffc03bbf33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1413.943777] RIP  [<ffffffffc03bbf33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1413.943791] IP: [<ffffffffc03bbf33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1413.943832] RIP: 0010:[<ffffffffc03bbf33>]  [<ffffffffc03bbf33>] dbuf_write.isra.17+0xd3/0x510 [zfs]
[ 1413.943927] RIP  [<ffffffffc03bbf33>] dbuf_write.isra.17+0xd3/0x510 [zfs]

There is nothing without the .isra.17 part....

Also, I can't find any zfs.ko file

root@servi-nixos:~# find /nix -name zfs.ko
root@servi-nixos:~# find / -name zfs.ko
root@servi-nixos:~# 
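(The empty results here are explained a few comments below: NixOS ships the module compressed as zfs.ko.xz, so an exact-name find misses it. A self-contained sketch of the effect, using a scratch directory rather than the real module path:)

```shell
# Stand-in demo: an exact -name match misses a compressed module file,
# while a wildcard finds it. /tmp/mods is a scratch directory.
mkdir -p /tmp/mods
: > /tmp/mods/zfs.ko.xz
find /tmp/mods -name 'zfs.ko'       # no output: exact name doesn't match
find /tmp/mods -name 'zfs.ko*'      # -> /tmp/mods/zfs.ko.xz
```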
tcaputi commented 7 years ago

Hmmm. It's possible that NixOS is renaming the module for whatever reason. I think if you run modinfo zfs, the first line will tell you where the .ko file is. The fact that you still have the .isra lines is a bit disappointing. One other thing you can try is the ./configure option --enable-debuginfo, which should make that stop as well.

What exactly is the workload on this system? I've been running a script trying to reproduce the problem for days with no luck yet.

sjau commented 7 years ago

filename: /run/current-system/kernel-modules/lib/modules/4.9.61/extra/zfs/zfs.ko.xz

Workload: https://images.sjau.ch/img/7f430c69.png

Not sure what you mean by using the ./configure option --enable-debuginfo

Should I add that configure option to zfs?

tcaputi commented 7 years ago

filename: /run/current-system/kernel-modules/lib/modules/4.9.61/extra/zfs/zfs.ko.xz

I guess your system uses compressed kernel modules for some reason; I hadn't heard of that before. So you'll probably have to decompress it before running gdb on it.
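Since the /run path above points into the (likely read-only) Nix store, the module should be copied somewhere writable before decompressing. A minimal round-trip sketch using a stand-in file, not the real module; substitute the zfs.ko.xz path from the modinfo output above:

```shell
# Round-trip demo: xz compresses in place (adding .xz) and xz -d
# decompresses, restoring the original file name.
printf 'stand-in module contents' > /tmp/fake-zfs.ko
xz -f /tmp/fake-zfs.ko          # produces /tmp/fake-zfs.ko.xz
xz -d /tmp/fake-zfs.ko.xz       # restores /tmp/fake-zfs.ko
cat /tmp/fake-zfs.ko
```

For the real module: copy zfs.ko.xz to e.g. /tmp, run xz -d on it, then point gdb at the resulting zfs.ko.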

Should I add that configure option to zfs?

Yes, if you know how to do that.

sjau commented 7 years ago

Should be as simple as

      configureFlags = [
        "--with-config=${configFile}"
        ] ++ optionals buildUser [
        "--with-dracutdir=$(out)/lib/dracut"
        "--with-udevdir=$(out)/lib/udev"
        "--with-systemdunitdir=$(out)/etc/systemd/system"
        "--with-systemdpresetdir=$(out)/etc/systemd/system-preset"
        "--with-mounthelperdir=$(out)/bin"
        "--sysconfdir=/etc"
        "--localstatedir=/var"
        "--enable-systemd"
        ] ++ optionals buildKernel [
        "--with-spl=${spl}/libexec/spl"
        "--with-linux=${kernel.dev}/lib/modules/${kernel.modDirVersion}/source"
        "--with-linux-obj=${kernel.dev}/lib/modules/${kernel.modDirVersion}/build"
        "--enable-debuginfo"
      ];