
PANIC at range_tree.c:368:range_tree_remove_impl() #11893

Closed: olidal closed this issue 1 year ago

olidal commented 3 years ago

System information

| Type | Version/Name |
| --- | --- |
| Distribution Name | Debian/proxmox |
| Distribution Version | 6.3.1 |
| Linux Kernel | 5.4.106-1-pve |
| Architecture | amd64 (qemu) |
| ZFS Version | 0.8.6-1 |
| SPL Version | 0.8.6-1 |

Describe the problem you're observing

After printing a panic message, all ZFS operations are stuck and the machine needs to be rebooted. See below for the panic message.

Note: This is NOT the ZFS version that ships with the distribution. It is a locally recompiled build of the zfs-0.8-release tag, which compiled with NO errors using the default compilation settings.

Describe how to reproduce the problem

Hard to tell exactly when and why, but it seems to happen under significant load, most probably when trying to destroy snapshots. After a reboot it happens again, not immediately but unpredictably. The stack trace is identical each time it reproduces.

I haven't yet tried locally compiled earlier versions, but the earlier version that came with Proxmox kernel 5.4.78 (ZFS 0.8.5) didn't seem to produce this error (a fresh Proxmox 6.1.1 or 6.2.1 install without apt update, I guess).

Note that it MAY have started after I once had to destroy 13k snapshots that had been accumulating in a dataset. But I don't see why that would matter, and I am not totally sure about the timing; it may be completely unrelated.

A scrub after the occurrence completed without error, and the panic happened again after scrubbing.

Note that most datasets are encrypted. This is a backup server, mostly receiving encrypted snapshots sent as raw, incremental replication streams. The source streams are also produced with a locally recompiled 0.8 version (I had to downgrade to this version because of another issue in ZFS 2.0.x, reported separately, that totally broke our backup scripts).

Include any warning/errors/backtraces from the system logs

[63926.387058] VERIFY3(size != 0) failed (0 != 0)
[63926.387114] PANIC at range_tree.c:368:range_tree_remove_impl()
[63926.387144] Showing stack for process 602
[63926.387156] CPU: 2 PID: 602 Comm: z_wr_iss Tainted: P           O      5.4.106-1-pve #1
[63926.387158] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
[63926.387163] Call Trace:
[63926.387188]  dump_stack+0x6d/0x8b
[63926.387227]  spl_dumpstack+0x29/0x2b [spl]
[63926.387235]  spl_panic+0xd3/0xfb [spl]
[63926.387241]  ? __blk_mq_try_issue_directly+0x177/0x1c0
[63926.387264]  ? fletcher_4_incremental_byteswap+0x130/0x130 [zcommon]
[63926.387333]  ? abd_iterate_func+0x7f/0x120 [zfs]
[63926.387338]  ? update_sd_lb_stats+0x104/0x790
[63926.387350]  ? avl_find+0x5f/0x90 [zavl]
[63926.387423]  range_tree_remove_impl+0x310/0x3d0 [zfs]
[63926.387426]  ? _cond_resched+0x19/0x30
[63926.387430]  ? __kmalloc_node+0x1e0/0x330
[63926.387500]  ? metaslab_df_alloc+0x131/0x1d0 [zfs]
[63926.387571]  range_tree_remove+0x10/0x20 [zfs]
[63926.387642]  metaslab_alloc_dva+0x273/0x1100 [zfs]
[63926.387714]  metaslab_alloc+0xb4/0x240 [zfs]
[63926.387790]  zio_dva_allocate+0xd2/0x810 [zfs]
[63926.387793]  ? _cond_resched+0x19/0x30
[63926.387796]  ? mutex_lock+0x12/0x30
[63926.387866]  ? metaslab_class_throttle_reserve+0xd8/0xf0 [zfs]
[63926.387875]  ? tsd_hash_search.isra.5+0x72/0xa0 [spl]
[63926.387883]  ? tsd_get_by_thread+0x2e/0x40 [spl]
[63926.387890]  ? taskq_member+0x18/0x30 [spl]
[63926.387967]  zio_execute+0x99/0xf0 [zfs]
[63926.387975]  taskq_thread+0x2ec/0x4d0 [spl]
[63926.387980]  ? wake_up_q+0x80/0x80
[63926.388057]  ? zio_taskq_member.isra.12.constprop.17+0x70/0x70 [zfs]
[63926.388061]  kthread+0x120/0x140
[63926.388070]  ? task_done+0xb0/0xb0 [spl]
[63926.388072]  ? kthread_park+0x90/0x90
[63926.388075]  ret_from_fork+0x35/0x40
olidal commented 3 years ago

I have been investigating this issue a bit and came up with a quick, and maybe not so dirty, fix.

First, looking at the code, I was not able to track down the code path that produces the trace above, i.e. I didn't find how and why metaslab_alloc_dva ends up calling range_tree_remove. I am not familiar with ZFS module debugging, so I probably missed something, e.g. maybe a context switch?

Anyway, looking more closely at the error, it happens when an ASSERT detects an attempt to free a zero-sized segment. The question is then: how potentially harmful is this error, and does it really deserve an ASSERT that blocks the whole system?

My impression is that the caller (which I was not able to identify) skipped a zero-size test and didn't prevent the useless call, so I believe it is harmless to just return without blocking everything.

As a temporary mitigation, I tested the following change, which replaces the original ASSERT with a call to zfs_panic_recover, and activated the zfs_recover option, so that ZFS operations are not fully interrupted as they are with the ASSERT.

    /*
     * Turn this into a recoverable error.
     * Requesting a zero-size allocation is certainly wrong, but there is a
     * chance it only happens because a check was skipped somewhere that
     * would have prevented this call.
     */
    /* VERIFY3U(size, !=, 0); */
    if (size == 0) {
        zfs_panic_recover("zfs: freeing 0-sized segment "
            "(offset=%llu size=%llu)",
            (longlong_t)start, (longlong_t)size);
        return;
    }
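For completeness, here is roughly how the zfs_recover option can be turned on so that zfs_panic_recover() only logs a warning; this is a sketch assuming the standard module-parameter paths, and the modprobe.d file name is my own choice:

```sh
# Enable zfs_recover at runtime (takes effect immediately for the loaded module).
echo 1 > /sys/module/zfs/parameters/zfs_recover

# Keep it enabled across reboots and module reloads.
echo "options zfs zfs_recover=1" > /etc/modprobe.d/zfs-recover.conf
```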

As expected, when the situation occurs I now get the following warning messages instead of the ASSERT-induced PANIC:

[14135.170214] WARNING: zfs: freeing 0-sized segment (offset=17179906048 size=0)
[14135.170231] WARNING: zfs: freeing 0-sized segment (offset=68719722496 size=0)

I think this fix may be acceptable because the severity of the error does not seem to deserve as strong a response as a PANIC, while it still allows tracking the issue since a trace is produced in the logs.

This fix is still in operation (and will be as long as no better one is found); do not hesitate to ask if further investigation on my side would help.

SpComb commented 3 years ago

Seeing the same or similar panic on ZoL 0.8.3 on CentOS 8 with the 4.18.0-147.5.1.el8_1.x86_64 kernel. It happens during a zfs receive of a raw encrypted incremental snapshot, and it locks up all zfs receive processes in a D state with task z_wr_iss:5051 blocked for more than 120 seconds.

[6197068.751558] VERIFY3(size != 0) failed (0 != 0)
[6197068.751888] PANIC at range_tree.c:368:range_tree_remove_impl()
[6197068.752488] Showing stack for process 5053
[6197068.753098] CPU: 5 PID: 5053 Comm: z_wr_iss Kdump: loaded Tainted: P           OE    --------- -  - 4.18.0-147.5.1.el8_1.x86_64 #1
[6197068.753709] Hardware name: Hewlett-Packard HP Z230 Tower Workstation/1905, BIOS L51 v01.52 07/20/2015
[6197068.754294] Call Trace:
[6197068.754899]  dump_stack+0x5c/0x80
[6197068.755494]  spl_panic+0xd3/0xfb [spl]
[6197068.756084]  ? apic_timer_interrupt+0xa/0x20
[6197068.756727]  ? range_tree_seg_compare+0x9/0x30 [zfs]
[6197068.757318]  ? secpolicy_basic_link+0x10/0x10 [zfs]
[6197068.757849]  ? avl_find+0x58/0x90 [zavl]
[6197068.758449]  range_tree_remove_impl+0x2c4/0x370 [zfs]
[6197068.759022]  ? zio_checksum_template_init+0x120/0x120 [zfs]
[6197068.759592]  ? abd_fletcher_4_native+0x8c/0xb0 [zfs]
[6197068.760134]  ? _cond_resched+0x15/0x30
[6197068.760706]  ? __kmalloc_node+0x1d8/0x2b0
[6197068.761289]  ? metaslab_df_alloc+0x127/0x1c0 [zfs]
[6197068.761851]  metaslab_group_alloc_normal.isra.19+0x348/0x970 [zfs]
[6197068.762414]  metaslab_alloc_dva+0x17b/0x730 [zfs]
[6197068.762962]  metaslab_alloc+0xb7/0x240 [zfs]
[6197068.763507]  zio_dva_allocate+0xcd/0x760 [zfs]
[6197068.764024]  ? _cond_resched+0x15/0x30
[6197068.764561]  ? mutex_lock+0xe/0x30
[6197068.765120]  ? metaslab_class_throttle_reserve+0xc6/0xf0 [zfs]
[6197068.765636]  ? tsd_hash_search.isra.3+0x42/0x90 [spl]
[6197068.766251]  ? tsd_get_by_thread+0x2a/0x40 [spl]
[6197068.766799]  zio_execute+0x90/0xf0 [zfs]
[6197068.767296]  taskq_thread+0x2e5/0x530 [spl]
[6197068.767811]  ? wake_up_q+0x70/0x70
[6197068.768344]  ? zio_taskq_member.isra.8.constprop.13+0x70/0x70 [zfs]
[6197068.768833]  ? taskq_thread_spawn+0x50/0x50 [spl]
[6197068.769342]  kthread+0x112/0x130
[6197068.769849]  ? kthread_flush_work_fn+0x10/0x10
[6197068.770361]  ret_from_fork+0x35/0x40
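If it helps anyone else debugging this hang, here is a small sketch (assuming root and a kernel that exposes /proc/<pid>/stack) for dumping the kernel stacks of the ZFS threads stuck in D state:

```sh
# Find ZFS-related tasks in uninterruptible sleep and print their kernel stacks.
for pid in $(ps -eo pid,stat,comm | awk '$2 ~ /^D/ && $3 ~ /zfs|z_wr_iss|txg_sync/ {print $1}'); do
    echo "=== PID $pid ($(cat /proc/$pid/comm)) ==="
    cat /proc/$pid/stack
done
```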
olidal commented 3 years ago

I have implemented and deployed the small hack from my comment above and tested it for a few days. Note that this is not a fix: it does not solve the cause of this issue, which should still be considered a bug.

With this hack I now get WARNINGs in the logs instead of all ZFS operations halting.

Here is a log of the WARNINGs I got in recent days (with the small patch above). It shows that the error comes in pairs, which seems consistent with my default setting of copies=1.

# dmesg -e | grep zfs
[Apr21 15:51] WARNING: zfs: freeing 0-sized segment (offset=618475339776 size=0)
[  +0.000012] WARNING: zfs: freeing 0-sized segment (offset=1013612331008 size=0)
[Apr21 19:41] WARNING: zfs: freeing 0-sized segment (offset=1219770712064 size=0)
[  +0.000014] WARNING: zfs: freeing 0-sized segment (offset=1340029796352 size=0)
[Apr23 04:46] WARNING: zfs: freeing 0-sized segment (offset=2199023505408 size=0)
[  +0.000013] WARNING: zfs: freeing 0-sized segment (offset=618475290624 size=0)
[Apr23 10:11] WARNING: zfs: freeing 0-sized segment (offset=2130303778816 size=0)
[  +0.000012] WARNING: zfs: freeing 0-sized segment (offset=2164663517184 size=0)
[Apr23 15:51] WARNING: zfs: freeing 0-sized segment (offset=2130303778816 size=0)
[  +0.000014] WARNING: zfs: freeing 0-sized segment (offset=2164663517184 size=0)
[Apr23 16:46] WARNING: zfs: freeing 0-sized segment (offset=618475290624 size=0)
[  +0.000015] WARNING: zfs: freeing 0-sized segment (offset=1889785806848 size=0)
[Apr24 18:16] WARNING: zfs: freeing 0-sized segment (offset=2336462209024 size=0)
[  +0.000013] WARNING: zfs: freeing 0-sized segment (offset=2302102470656 size=0)
[Apr24 21:26] WARNING: zfs: freeing 0-sized segment (offset=2405181685760 size=0)
[  +0.000023] WARNING: zfs: freeing 0-sized segment (offset=2164663517184 size=0)
[Apr25 08:16] WARNING: zfs: freeing 0-sized segment (offset=2903397892096 size=0)
[  +0.000018] WARNING: zfs: freeing 0-sized segment (offset=2370821947392 size=0)
[Apr25 09:56] WARNING: zfs: freeing 0-sized segment (offset=2800318676992 size=0)
[  +0.000074] WARNING: zfs: freeing 0-sized segment (offset=2834678841344 size=0)
[Apr25 14:15] WARNING: zfs: freeing 0-sized segment (offset=2920577925120 size=0)
[  +0.000014] WARNING: zfs: freeing 0-sized segment (offset=2989297238016 size=0)
[Apr25 21:31] WARNING: zfs: freeing 0-sized segment (offset=2869038153728 size=0)
[  +0.000013] WARNING: zfs: freeing 0-sized segment (offset=2765958938624 size=0)
[Apr25 22:26] WARNING: zfs: freeing 0-sized segment (offset=2869038153728 size=0)
[  +0.000020] WARNING: zfs: freeing 0-sized segment (offset=2765958938624 size=0)
[Apr25 23:25] WARNING: zfs: freeing 0-sized segment (offset=2869038153728 size=0)
[  +0.000015] WARNING: zfs: freeing 0-sized segment (offset=2765958938624 size=0)
[Apr26 08:10] WARNING: zfs: freeing 0-sized segment (offset=3075196583936 size=0)
[  +0.000021] WARNING: zfs: freeing 0-sized segment (offset=3401614327808 size=0)
[Apr26 12:31] WARNING: zfs: freeing 0-sized segment (offset=3539053117440 size=0)
[  +0.000014] WARNING: zfs: freeing 0-sized segment (offset=3332895277056 size=0)
[Apr27 01:36] WARNING: zfs: freeing 0-sized segment (offset=3075196829696 size=0)
[  +0.000015] WARNING: zfs: freeing 0-sized segment (offset=3401614327808 size=0)
[Apr27 09:11] WARNING: zfs: freeing 0-sized segment (offset=188978839552 size=0)
[  +0.000015] WARNING: zfs: freeing 0-sized segment (offset=3624952397824 size=0)

Worth mentioning: my ZFS runs in a VM and uses only a single vdev, so the 2 copies will necessarily end up on the same (virtual) device. (I know this is not the best use of ZFS, but the virtualized environment seems to be the only way to receive snapshots from multiple sources in properly jailed environments.)

spmfox commented 2 years ago

Hello, hitting this issue too, twice a month or more. CentOS Linux release 7.9.2009 (Core), kernel 3.10.0, ZFS 2.0.5. A zfs send -w -R -I | zfs receive -Fu of raw encrypted incremental datasets on the same machine sometimes triggers this:
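For clarity, the job looks roughly like this (pool, dataset, and snapshot names below are placeholders, not the actual ones):

```sh
# Raw (-w), recursive (-R), incremental (-I) send of encrypted datasets,
# received with -F (rollback) and -u (do not mount) on a pool in the same machine.
zfs send -w -R -I tank/enc@snap-old tank/enc@snap-new | zfs receive -Fu backup/enc
```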

[2070639.496582] VERIFY3(size != 0) failed (0 != 0)
[2070639.496591] PANIC at range_tree.c:437:range_tree_remove_impl()
[2070639.496596] Showing stack for process 1196
[2070639.496602] CPU: 10 PID: 1196 Comm: z_wr_iss Kdump: loaded Tainted: P           OE  ------------   3.10.0-1160.36.2.el7.x86_64 #1
[2070639.496605] Hardware name: System manufacturer System Product Name/ROG STRIX X370-F GAMING, BIOS 4012 04/20/2018
[2070639.496607] Call Trace:
[2070639.496618]  [<ffffffff9a383559>] dump_stack+0x19/0x1b
[2070639.496651]  [<ffffffffc0818c5b>] spl_dumpstack+0x2b/0x30 [spl]
[2070639.496665]  [<ffffffffc0818d29>] spl_panic+0xc9/0x110 [spl]
[2070639.496721]  [<ffffffffc104d65d>] ? zfs_btree_insert_leaf_impl.isra.10+0x4d/0x60 [zfs]
[2070639.496773]  [<ffffffffc104dd73>] ? zfs_btree_insert_into_leaf+0x193/0x220 [zfs]
[2070639.496779]  [<ffffffff99e27de9>] ? ___slab_alloc+0x229/0x520
[2070639.496831]  [<ffffffffc104cef8>] ? zfs_btree_find_parent_idx+0x88/0x100 [zfs]
[2070639.496883]  [<ffffffffc104e5e4>] ? zfs_btree_find+0x1d4/0x370 [zfs]
[2070639.496963]  [<ffffffffc10c2e04>] range_tree_remove_impl+0xe24/0xf30 [zfs]
[2070639.497038]  [<ffffffffc10b75c9>] ? metaslab_df_alloc+0x399/0x5c0 [zfs]
[2070639.497112]  [<ffffffffc10bfbcb>] ? multilist_sublist_unlock+0x2b/0x40 [zfs]
[2070639.497185]  [<ffffffffc10c2f40>] range_tree_remove+0x10/0x20 [zfs]
[2070639.497260]  [<ffffffffc10b97a5>] metaslab_group_alloc_normal.isra.23+0x775/0x980 [zfs]
[2070639.497343]  [<ffffffffc10bafe1>] metaslab_alloc_dva+0x1b1/0x6c0 [zfs]
[2070639.497415]  [<ffffffffc10bc097>] metaslab_alloc+0xc7/0x250 [zfs]
[2070639.497506]  [<ffffffffc114a036>] zio_dva_allocate+0xf6/0x8f0 [zfs]
[2070639.497583]  [<ffffffffc117538f>] ? zio_crypt_encode_params_bp+0x6f/0xa0 [zfs]
[2070639.497665]  [<ffffffffc114593a>] ? zio_encrypt+0x19a/0x680 [zfs]
[2070639.497670]  [<ffffffff9a3875d2>] ? mutex_lock+0x12/0x2f
[2070639.497685]  [<ffffffffc081fbe2>] ? tsd_hash_search.isra.0+0x72/0xd0 [spl]
[2070639.497764]  [<ffffffffc114520f>] zio_execute+0x9f/0x100 [zfs]
[2070639.497779]  [<ffffffffc081e4f6>] taskq_thread+0x2c6/0x520 [spl]
[2070639.497785]  [<ffffffff99cdadd0>] ? wake_up_state+0x20/0x20
[2070639.497862]  [<ffffffffc1145170>] ? zio_taskq_member.isra.10.constprop.13+0x70/0x70 [zfs]
[2070639.497878]  [<ffffffffc081e230>] ? taskq_thread_spawn+0x60/0x60 [spl]
[2070639.497883]  [<ffffffff99cc5e41>] kthread+0xd1/0xe0
[2070639.497887]  [<ffffffff99cc5d70>] ? insert_kthread_work+0x40/0x40
[2070639.497893]  [<ffffffff9a395de4>] ret_from_fork_nospec_begin+0xe/0x21
[2070639.497897]  [<ffffffff99cc5d70>] ? insert_kthread_work+0x40/0x40

Scrubs are clean. The entire ZFS system doesn't appear to be locked up, but certain operations involving the pool do not work; for example, a rollback on the receiving pool is hung. However, a send job from the original sending pool to another off-site receiving pool is working just fine.

Edit: After a bit more testing, it looks like ZFS operations such as create, destroy, rollback, etc. do not work on the local receiving pool. The local sending pool is acting normally, so I assume whatever is stuck here is stuck on the receive side.

Edit2: Looks like the receive process is still running at 100% CPU. strace shows nothing at all for the process.

Edit3: Last night I was able to reproduce this three times in a row; the same send/receive job caused a panic each time. This morning I manually sent the remaining datasets to the receiving pool one at a time (instead of the recursive send) and they all went okay. I then created a new recursive snapshot to try the entire job again, and this time it sent normally. I'm not sure what that means: if there were an issue with the data, you'd think I would have hit it sending the datasets one at a time, but that didn't happen. Whatever this condition was, it persisted across reboots and it would panic at roughly the same point after I started the recursive send/receive job.

nahall commented 2 years ago

I just hit this same problem with a zfs recv. Running proxmox 7.0, zfs 2.0.5.

devyn commented 2 years ago

I am having this issue on zfs 2.0.6 on ubuntu 21.10, with receiving encrypted raw datasets.

Derkades commented 2 years ago

I don't know enough to conclude whether it is this exact one, but today while using syncoid I experienced a similar issue. Debian 11, zfs-dkms version 2.0.3-9. Hopefully this helps.

dmesg:

```
[30354.886726] VERIFY3(size != 0) failed (0 != 0)
[30354.886763] PANIC at range_tree.c:437:range_tree_remove_impl()
[30354.886784] Showing stack for process 544
[30354.886805] CPU: 2 PID: 544 Comm: z_wr_iss Tainted: P S OE 5.10.0-10-amd64 #1 Debian 5.10.84-1
[30354.886836] Hardware name: Dell Inc. Inspiron 3537/03JPPR, BIOS A10 01/24/2018
[30354.886866] Call Trace:
[30354.886883]  dump_stack+0x6b/0x83
[30354.886909]  spl_panic+0xd4/0xfc [spl]
[30354.886932]  ? scsi_queue_rq+0x31f/0xa20 [scsi_mod]
[30354.886952]  ? blk_mq_dispatch_rq_list+0x11e/0x7b0
[30354.887049]  ? abd_iter_init+0x38/0x70 [zfs]
[30354.887133]  ? abd_iter_advance+0x29/0x60 [zfs]
[30354.887190]  ? abd_iterate_func+0xd6/0x1c0 [zfs]
[30354.887215]  ? fletcher_4_incremental_native+0x160/0x160 [zcommon]
[30354.887286]  ? metaslab_rangesize64_compare+0x40/0x40 [zfs]
[30354.887344]  ? zfs_btree_find+0x175/0x300 [zfs]
[30354.887411]  range_tree_remove_impl+0x711/0xf10 [zfs]
[30354.887479]  ? metaslab_df_alloc+0xc4/0x5b0 [zfs]
[30354.887501]  ? _cond_resched+0x16/0x40
[30354.888335]  ? recalibrate_cpu_khz+0x10/0x10
[30354.889123]  ? multilist_sublist_insert_tail+0x2b/0x50 [zfs]
[30354.889899]  metaslab_alloc_dva+0x24c/0x1210 [zfs]
[30354.890610]  ? _cond_resched+0x16/0x40
[30354.891339]  metaslab_alloc+0xbe/0x250 [zfs]
[30354.892066]  zio_dva_allocate+0xd4/0x800 [zfs]
[30354.892953]  ? _cond_resched+0x16/0x40
[30354.893619]  ? mutex_lock+0xe/0x30
[30354.894327]  ? metaslab_class_throttle_reserve+0xc3/0xe0 [zfs]
[30354.895050]  ? zio_io_to_allocate+0x60/0x80 [zfs]
[30354.895769]  zio_execute+0x81/0x120 [zfs]
[30354.896441]  taskq_thread+0x2da/0x520 [spl]
[30354.897097]  ? wake_up_q+0xa0/0xa0
[30354.897807]  ? zio_destroy+0xf0/0xf0 [zfs]
[30354.898620]  ? taskq_thread_spawn+0x50/0x50 [spl]
[30354.899396]  kthread+0x11b/0x140
[30354.900037]  ? __kthread_bind_mask+0x60/0x60
[30354.900677]  ret_from_fork+0x22/0x30
```

Looks like the receive process is still running, 100% CPU. Strace shows nothing at all on the process.

I was running two sync processes at once, and indeed 2 cores are at 100% iowait.

mafredri commented 2 years ago

I woke up to the same/similar issue this morning on Debian Bullseye running Linux 5.15.0-0.bpo.3-amd64 and ZFS 2.1.4-1~bpo11+1.

 kernel:[377052.452950] VERIFY3(size != 0) failed (0 != 0)
 kernel:[377052.457965] PANIC at range_tree.c:438:range_tree_remove_impl()

Unfortunately, I didn't have a serial console hooked up, so no call trace :(. I do believe replication was in progress when it happened; anywhere between 1 and 4 datasets were probably being received concurrently on this machine via zrepl at the time.
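In case it happens again, a persistent journal usually preserves the trace across the reboot even without a serial console; a minimal sketch, assuming systemd-journald:

```sh
# Make the journal persistent, then after the next occurrence pull the
# kernel log from the previous boot and look for the panic.
mkdir -p /var/log/journal
systemctl restart systemd-journald
journalctl -k -b -1 | grep -B2 -A40 'PANIC at range_tree.c'
```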

olidal commented 2 years ago

I had a lot of these at one point when I was using ZFS in Proxmox LXC containers and did not pay attention to the fact that the zfsutils-linux package in the CTs did not exactly match the ZFS kernel module in the HV. I was convinced that the ioctl protocol between the ZFS user-land commands and the kernel module would be able to detect and deal with such a mismatch, but on the contrary it seems very sensitive to it.

I first fixed it the hard way, by selecting and freezing a ZFS version, recompiling the whole ZFS project by hand in the Proxmox HV, and deploying the libraries and user-land commands manually in the CTs using bind mounts. It worked well and was very stable, but I was not happy with the idea of not following ZFS updates. I have now reverted to the mainstream version and Proxmox updates, which I keep identical in the CTs and the HV (using unattended-upgrades), and I have not experienced the issue since.

Although I now have another issue which I believe is also related to a version mismatch: I have a RAIDZ1 pool suspended without any reported I/O fault on the vdevs, apparently due to a txg timeout. I think it may happen following automatic updates with a delayed reboot (i.e. the zfs commands and libraries get updated, but the kernel is still using the old module version until the next reboot, so we end up with a version mismatch between user-land and kernel). A reboot fixes it, but I still have to find a way to hook an automatic reboot to the zfs package update.
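A sketch of the kind of check I have in mind, comparing the loaded kernel module against the installed user-land tools (the sysfs path and zfs version output format are as in recent OpenZFS releases):

```sh
# Warn (or schedule a reboot) when the zfs userland no longer matches the loaded kernel module.
kmod_ver=$(cat /sys/module/zfs/version)
user_ver=$(zfs version | awk -F'zfs-' '/^zfs-[0-9]/ {print $2}')
if [ "$kmod_ver" != "$user_ver" ]; then
    echo "zfs userland ($user_ver) != kernel module ($kmod_ver): reboot needed"
fi
```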


Luflosi commented 1 year ago

I just ran into what I believe to be the same issue while sending an encrypted dataset to a different ZFS pool on the same machine:

[Aug 6 12:04] VERIFY3(size != 0) failed (0 != 0)
[  +0.000028] PANIC at range_tree.c:438:range_tree_remove_impl()
[  +0.000016] Showing stack for process 168596
[  +0.000002] CPU: 2 PID: 168596 Comm: z_wr_iss Tainted: P          IO      5.18.16 #1-NixOS
[  +0.000004] Hardware name: Dell Inc. PowerEdge R610/0F0XJ6, BIOS 6.6.0 05/22/2018
[  +0.000002] Call Trace:
[  +0.000004]  <TASK>
[  +0.000004]  dump_stack_lvl+0x45/0x5e
[  +0.000011]  spl_panic+0xd1/0xe9 [spl]
[  +0.000016]  ? zfs_btree_insert_into_leaf+0x232/0x2a0 [zfs]
[  +0.000124]  ? pn_free+0x30/0x30 [zfs]
[  +0.000124]  ? zfs_btree_find_parent_idx+0x72/0xd0 [zfs]
[  +0.000092]  ? metaslab_rangesize64_compare+0x40/0x40 [zfs]
[  +0.000122]  ? zfs_btree_find+0x175/0x300 [zfs]
[  +0.000094]  range_tree_remove_impl+0xa97/0xef0 [zfs]
[  +0.000125]  ? metaslab_df_alloc+0xc4/0x5c0 [zfs]
[  +0.000123]  ? recalibrate_cpu_khz+0x10/0x10
[  +0.000006]  ? multilist_sublist_insert_tail+0x2b/0x50 [zfs]
[  +0.000126]  metaslab_alloc_dva+0xbd2/0x1490 [zfs]
[  +0.000126]  ? preempt_count_add+0x70/0xa0
[  +0.000006]  metaslab_alloc+0xd3/0x280 [zfs]
[  +0.000125]  zio_dva_allocate+0xd3/0x900 [zfs]
[  +0.000141]  ? zio_compress_data+0x39/0x100 [zfs]
[  +0.000139]  ? zio_encrypt+0x4eb/0x710 [zfs]
[  +0.000140]  ? preempt_count_add+0x70/0xa0
[  +0.000004]  ? _raw_spin_lock+0x13/0x40
[  +0.000003]  zio_execute+0x83/0x120 [zfs]
[  +0.000141]  taskq_thread+0x2cf/0x500 [spl]
[  +0.000016]  ? wake_up_q+0x90/0x90
[  +0.000005]  ? zio_gang_tree_free+0x70/0x70 [zfs]
[  +0.000141]  ? taskq_thread_spawn+0x60/0x60 [spl]
[  +0.000011]  kthread+0xe8/0x110
[  +0.000005]  ? kthread_complete_and_exit+0x20/0x20
[  +0.000005]  ret_from_fork+0x22/0x30
[  +0.000006]  </TASK>

This has happened multiple times before. The OS runs on bare metal, not in a VM, and I haven't updated my ZFS userspace tools since the last reboot yesterday:

$ zfs --version
zfs-2.1.5-1
zfs-kmod-2.1.5-1
thefloweringash commented 1 year ago

I think I started seeing this after I configured automatic sends of an encrypted dataset to another pool on the same machine. I tried to enable debugging via --enable-debug, and the failure changed from

VERIFY3(size != 0) failed (0 != 0)
PANIC at range_tree.c:438:range_tree_remove_impl()

to

VERIFY3(zio->io_size == BP_GET_PSIZE(bp)) failed (0 == 16384)
PANIC at zio.c:3505:zio_dva_allocate()

Unfortunately I'm quite unfamiliar with the internals of zfs, but I'm happy to run with patches that add more logging or debugging and report back.
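For reference, the debug build mentioned above is roughly the standard OpenZFS from-source build with assertions enabled; a sketch, the exact steps vary by distro and require the kernel headers:

```sh
# Build ZFS with debug assertions (VERIFY/ASSERT) enabled.
git clone https://github.com/openzfs/zfs.git && cd zfs
git checkout zfs-2.1.5
sh autogen.sh
./configure --enable-debug
make -s -j"$(nproc)"
```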


❯ zfs --version
zfs-2.1.5-1
zfs-kmod-2.1.5-1
Example stack trace without debugging:

```
VERIFY3(size != 0) failed (0 != 0)
PANIC at range_tree.c:438:range_tree_remove_impl()
Showing stack for process 212670
CPU: 0 PID: 212670 Comm: z_wr_iss Tainted: P O 5.15.61 #1-NixOS
Hardware name: Supermicro X9SCL-II/X9SCM-II/X9SCL-II/X9SCM-II, BIOS 2.3a 01/06/2021
Call Trace:
 dump_stack_lvl+0x46/0x5e
 spl_panic+0xd1/0xe9 [spl]
 ? fletcher_4_sse2_native+0x18/0xa0 [zcommon]
 ? abd_fletcher_4_iter+0x67/0xc0 [zcommon]
 ? zfs_btree_insert_into_leaf+0x232/0x2a0 [zfs]
 ? abd_iterate_func.part.0+0xdf/0x1c0 [zfs]
 ? fletcher_4_incremental_native+0x160/0x160 [zcommon]
 ? pn_free+0x30/0x30 [zfs]
 ? zfs_btree_find_parent_idx+0x72/0xd0 [zfs]
 ? metaslab_rangesize64_compare+0x40/0x40 [zfs]
 ? zfs_btree_find+0x175/0x300 [zfs]
 range_tree_remove_impl+0xa97/0xef0 [zfs]
 ? metaslab_df_alloc+0xc4/0x5c0 [zfs]
 ? __cond_resched+0x16/0x50
 ? recalibrate_cpu_khz+0x10/0x10
 ? multilist_sublist_insert_tail+0x2b/0x50 [zfs]
 metaslab_alloc_dva+0xb6e/0x13f0 [zfs]
 ? __cond_resched+0x16/0x50
 metaslab_alloc+0xd3/0x280 [zfs]
 zio_dva_allocate+0xd3/0x900 [zfs]
 ? zio_compress_data+0x39/0x100 [zfs]
 ? zio_encrypt+0x4eb/0x710 [zfs]
 ? zio_io_to_allocate+0x63/0x90 [zfs]
 zio_execute+0x83/0x120 [zfs]
 taskq_thread+0x2cf/0x500 [spl]
 ? wake_up_q+0x90/0x90
 ? zio_gang_tree_free+0x70/0x70 [zfs]
 ? taskq_thread_spawn+0x60/0x60 [spl]
 kthread+0x127/0x150
 ? set_kthread_struct+0x50/0x50
 ret_from_fork+0x22/0x30
```
Example stack trace with debugging:

```
VERIFY3(zio->io_size == BP_GET_PSIZE(bp)) failed (0 == 16384)
PANIC at zio.c:3505:zio_dva_allocate()
Showing stack for process 2947062
CPU: 7 PID: 2947062 Comm: z_wr_iss Tainted: P O 5.15.64 #1-NixOS
Hardware name: Supermicro X9SCL-II/X9SCM-II/X9SCL-II/X9SCM-II, BIOS 2.3a 01/06/2021
Call Trace:
 dump_stack_lvl+0x46/0x5e
 spl_panic+0xd1/0xe9 [spl]
 ? spa_has_pending_synctask+0x1c/0x50 [zfs]
 ? vdev_queue_io_to_issue+0x4be/0x1010 [zfs]
 ? __cond_resched+0x16/0x50
 ? kmem_cache_alloc+0x3c/0x3c0
 ? spl_kmem_cache_alloc+0xc6/0xc50 [spl]
 ? zio_checksum_templates_free+0x80/0x80 [zfs]
 zio_dva_allocate+0x87d/0xcf0 [zfs]
 ? mutex_lock+0xe/0x30
 ? zfs_refcount_add_many+0xdc/0x150 [zfs]
 ? zio_io_to_allocate+0xa2/0x190 [zfs]
 zio_execute+0xde/0x2d0 [zfs]
 taskq_thread+0x302/0x610 [spl]
 ? wake_up_q+0x90/0x90
 ? zio_gang_tree_free+0xb0/0xb0 [zfs]
 ? taskq_thread_spawn+0x60/0x60 [spl]
 kthread+0x127/0x150
 ? set_kthread_struct+0x50/0x50
 ret_from_fork+0x22/0x30
```
thefloweringash commented 1 year ago

I patched a WARN_ON_ONCE(psize == 0); into zio_write, and it triggered shortly before the panic.

Full trace ``` [157271.055093] ------------[ cut here ]------------ [157271.055096] WARNING: CPU: 1 PID: 1526 at /build/source/module/zfs/zio.c:1139 zio_write+0x1e9/0x1f0 [zfs] [157271.055247] Modules linked in: cfg80211 rfkill 8021q intel_rapl_msr intel_rapl_common ip6_tables xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip6t_rpfilter x86_pkg_temp_thermal ipt_rpfilter intel_powerclamp xt_pkttype xt_LOG nf_log_syslog ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 nls_iso8859_1 nls_cp437 xt_tcpudp nft_compat coretemp vfat iTCO_wdt crc32_pclmul intel_pmc_bxt ghash_clmulni_intel fat at24 watchdog regmap_i2c aesni_intel ipmi_ssif nft_counter libaes mgag200 crypto_simd cryptd drm_kms_helper rapl nf_tables deflate evdev joydev i2c_algo_bit intel_cstate libcrc32c input_leds crc32c_generic intel_uncore fb_sys_fops efi_pstore led_class acpi_ipmi mousedev syscopyarea nfnetlink sysfillrect sysimgblt sch_fq_codel mac_hid i2c_i801 ipmi_si mei_me ie31200_edac mei ipmi_devintf lpc_ich i2c_smbus mfd_core edac_core ipmi_msghandler tiny_power_button video fan button thermal atkbd libps2 serio loop tun tap macvlan bridge stp llc kvm_intel kvm drm irqbypass pstore [157271.055323] agpgart fuse backlight i2c_core configfs efivarfs ip_tables x_tables autofs4 hid_generic usbhid hid sd_mod t10_pi crc_t10dif crct10dif_generic ahci libahci libata ehci_pci ehci_hcd e1000e usbcore scsi_mod crct10dif_pclmul crct10dif_common crc32c_intel ptp scsi_common pps_core usb_common rtc_cmos af_packet dm_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) [157271.055354] CPU: 1 PID: 1526 Comm: txg_sync Tainted: P O 5.15.67 #1-NixOS [157271.055357] Hardware name: Supermicro X9SCL-II/X9SCM-II/X9SCL-II/X9SCM-II, BIOS 2.3a 01/06/2021 [157271.055359] RIP: 0010:zio_write+0x1e9/0x1f0 [zfs] [157271.055457] Code: 48 38 85 c9 74 1b 48 c7 40 34 00 00 00 00 48 83 c4 10 5b 5d 41 5c 41 5d 41 5e 41 5f c3 cc cc cc cc 8b 50 40 85 d2 74 b3 eb dc <0f> 0b e9 c1 fe ff ff 0f 1f 44 00 00 68 f4 00 78 01 6a 01 ff 74 24 [157271.055458] RSP: 0018:ffffae58c994b418 EFLAGS: 00010246 [157271.055460] RAX: 0000000000000003 RBX: ffffae58c994b508 RCX: ffff8a5a37cc7c50 [157271.055461] RDX: 0000000000000002 RSI: ffff8a58d3ba8000 RDI: ffff8a58d3ba8000 [157271.055462] RBP: ffff8a5b53601e08 R08: ffff8a5b53601e08 R09: 0000000000004000 [157271.055463] R10: ffffae58c994b5b0 R11: 0000000000000000 R12: ffff8a58d3ba8000 [157271.055464] R13: ffff8a5dab1f9380 R14: ffff8a5a37cc7c50 R15: 000000000080b8fe [157271.055465] FS: 0000000000000000(0000) GS:ffff8a5fcfc40000(0000) knlGS:0000000000000000 [157271.055466] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [157271.055467] CR2: 00007fde430532c8 CR3: 000000079e210004 CR4: 00000000001706e0 [157271.055468] Call Trace: [157271.055471] [157271.055474] arc_write+0x36c/0x870 [zfs] [157271.055529] ? arc_hdr_alloc_abd+0x270/0x270 [zfs] [157271.055583] ? arc_write_children_ready+0x20/0x20 [zfs] [157271.055636] ? arc_hdr_destroy+0x3c0/0x3c0 [zfs] [157271.055690] ? dbuf_unoverride+0x190/0x190 [zfs] [157271.055748] ? dbuf_rele+0x80/0x80 [zfs] [157271.055806] ? dbuf_write+0x46a/0x920 [zfs] [157271.055863] dbuf_write+0x46a/0x920 [zfs] [157271.055921] ? dmu_buf_unlock_parent+0xc0/0xc0 [zfs] [157271.055978] ? dbuf_unoverride+0x190/0x190 [zfs] [157271.056035] ? dbuf_rele+0x80/0x80 [zfs] [157271.056092] ? arc_cksum_free+0x85/0xf0 [zfs] [157271.056146] dbuf_sync_leaf+0x20a/0xd60 [zfs] [157271.056211] ? 
dbuf_clear_data+0xc0/0xc0 [zfs] [157271.056269] dbuf_sync_list+0xc5/0x160 [zfs] [157271.056327] dbuf_sync_indirect+0x1bc/0x5b0 [zfs] [157271.056385] dbuf_sync_list+0x50/0x160 [zfs] [157271.056443] dbuf_sync_indirect+0x1bc/0x5b0 [zfs] [157271.056501] dbuf_sync_list+0x50/0x160 [zfs] [157271.056559] dbuf_sync_indirect+0x1bc/0x5b0 [zfs] [157271.056617] dbuf_sync_list+0x50/0x160 [zfs] [157271.056675] dbuf_sync_indirect+0x1bc/0x5b0 [zfs] [157271.056733] dbuf_sync_list+0x50/0x160 [zfs] [157271.056791] dbuf_sync_indirect+0x1bc/0x5b0 [zfs] [157271.056848] ? arc_write+0x36c/0x870 [zfs] [157271.056902] dbuf_sync_list+0x50/0x160 [zfs] [157271.056960] dnode_sync+0x674/0x1a00 [zfs] [157271.057026] ? dmu_objset_sync+0x1e8/0x690 [zfs] [157271.057087] ? dmu_objset_sync+0x1e8/0x690 [zfs] [157271.057149] dmu_objset_sync+0x211/0x690 [zfs] [157271.057241] dsl_dataset_sync+0x9e/0x370 [zfs] [157271.057307] dsl_pool_sync+0x124/0x8f0 [zfs] [157271.057376] spa_sync+0x7c7/0x1890 [zfs] [157271.057451] ? __cond_resched+0x16/0x50 [157271.057455] ? spa_txg_history_init_io+0xe3/0x110 [zfs] [157271.057533] txg_sync_thread+0x38a/0x5e0 [zfs] [157271.057611] ? txg_thread_exit+0xe0/0xe0 [zfs] [157271.057688] thread_generic_wrapper+0x62/0xa0 [spl] [157271.057696] ? __thread_exit+0x20/0x20 [spl] [157271.057702] kthread+0x127/0x150 [157271.057706] ? set_kthread_struct+0x50/0x50 [157271.057708] ret_from_fork+0x22/0x30 [157271.057712] [157271.057712] ---[ end trace ebdbe94305dbda16 ]--- [157271.058169] VERIFY3(zio->io_size == BP_GET_PSIZE(bp)) failed (0 == 16384) [157271.058215] PANIC at zio.c:3507:zio_dva_allocate() [157271.058239] Showing stack for process 259750 [157271.058241] CPU: 6 PID: 259750 Comm: z_wr_iss Tainted: P W O 5.15.67 #1-NixOS [157271.058244] Hardware name: Supermicro X9SCL-II/X9SCM-II/X9SCL-II/X9SCM-II, BIOS 2.3a 01/06/2021 [157271.058246] Call Trace: [157271.058248] [157271.058250] dump_stack_lvl+0x46/0x5e [157271.058258] spl_panic+0xd1/0xe9 [spl] [157271.058271] ? spa_has_pending_synctask+0x1c/0x50 [zfs] [157271.058387] ? vdev_queue_io_to_issue+0x4be/0x1010 [zfs] [157271.058471] ? __cond_resched+0x16/0x50 [157271.058475] ? kmem_cache_alloc+0x3c/0x3c0 [157271.058480] ? spl_kmem_cache_alloc+0xc6/0xc50 [spl] [157271.058488] ? zio_checksum_templates_free+0x80/0x80 [zfs] [157271.058577] zio_dva_allocate+0x87d/0xcf0 [zfs] [157271.058668] ? mutex_lock+0xe/0x30 [157271.058671] ? zfs_refcount_add_many+0xdc/0x150 [zfs] [157271.058747] ? zio_io_to_allocate+0xa2/0x190 [zfs] [157271.058830] zio_execute+0xde/0x2d0 [zfs] [157271.058913] taskq_thread+0x302/0x610 [spl] [157271.058920] ? wake_up_q+0x90/0x90 [157271.058924] ? zio_gang_tree_free+0xb0/0xb0 [zfs] [157271.059007] ? taskq_thread_spawn+0x60/0x60 [spl] [157271.059014] kthread+0x127/0x150 [157271.059018] ? set_kthread_struct+0x50/0x50 [157271.059020] ret_from_fork+0x22/0x30 [157271.059024] ```
Patch:

```diff
diff --git a/module/zfs/zio.c b/module/zfs/zio.c
index 7b55450ca..9a259b785 100644
--- a/module/zfs/zio.c
+++ b/module/zfs/zio.c
@@ -1125,6 +1125,8 @@ zio_write(zio_t *pio, spa_t *spa, uint64_t txg, blkptr_t *bp,
 	    zp->zp_copies > 0 &&
 	    zp->zp_copies <= spa_max_replication(spa));
 
+	WARN_ON_ONCE(psize == 0);
+
 	zio = zio_create(pio, spa, txg, bp, data, lsize, psize, done, private,
 	    ZIO_TYPE_WRITE, priority, flags, NULL, 0, zb,
 	    ZIO_STAGE_OPEN, (flags & ZIO_FLAG_DDT_CHILD) ?
```
thefloweringash commented 1 year ago

Checking back in the earlier messages, I didn't see anyone share the stuck task report (excluding the zio_execute thread), which might indicate what zfs is doing at the time of the failure and help work backwards to the cause. Here's an example, just in case it's helpful.

[138854.096402] INFO: task zfs:2790157 blocked for more than 122 seconds.
[138854.096419]       Tainted: P        W  O      5.15.74 #1-NixOS
[138854.096434] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[138854.096453] task:zfs             state:D stack:    0 pid:2790157 ppid:2790154 flags:0x00004000
[138854.096455] Call Trace:
[138854.096456]  <TASK>
[138854.096456]  __schedule+0x2e1/0x1350
[138854.096459]  schedule+0x5b/0xd0
[138854.096461]  io_schedule+0x42/0x70
[138854.096463]  cv_wait_common+0xef/0x2a0 [spl]
[138854.096469]  ? finish_wait+0x90/0x90
[138854.096470]  txg_wait_synced_impl+0xf2/0x270 [zfs]
[138854.096580]  txg_wait_synced+0xc/0x40 [zfs]
[138854.096668]  dsl_sync_task_common+0x1b0/0x290 [zfs]
[138854.096738]  ? dmu_recv_cleanup_ds+0x1e0/0x1e0 [zfs]
[138854.096799]  ? dmu_recv_end_sync+0x5d0/0x5d0 [zfs]
[138854.096860]  ? dmu_recv_end_sync+0x5d0/0x5d0 [zfs]
[138854.096920]  ? dmu_recv_cleanup_ds+0x1e0/0x1e0 [zfs]
[138854.096981]  dsl_sync_task+0x16/0x20 [zfs]
[138854.097051]  dmu_recv_existing_end+0x5b/0x90 [zfs]
[138854.097112]  ? tsd_hash_dtor+0x73/0x90 [spl]
[138854.097119]  ? rrw_exit+0x143/0x2b0 [zfs]
[138854.097191]  ? kfree+0xc2/0x250
[138854.097194]  ? __cond_resched+0x16/0x50
[138854.097196]  ? down_read+0xe/0x80
[138854.097198]  ? zvol_find_by_name_hash+0x14b/0x390 [zfs]
[138854.097279]  dmu_recv_end+0x8c/0xc0 [zfs]
[138854.097340]  zfs_ioc_recv_impl.constprop.0+0xdd3/0x1200 [zfs]
[138854.097422]  ? cpumask_next+0x1e/0x30
[138854.097425]  ? select_task_rq_fair+0x131/0x1080
[138854.097429]  zfs_ioc_recv_new+0x2d9/0x370 [zfs]
[138854.097521]  ? nvt_remove_nvpair+0x124/0x270 [znvpair]
[138854.097538]  ? __cond_resched+0x16/0x50
[138854.097540]  ? __kmalloc_node+0x68/0x470
[138854.097543]  ? __kmalloc_node+0x68/0x470
[138854.097545]  ? spl_kmem_alloc_impl+0xed/0x100 [spl]
[138854.097550]  ? nv_mem_zalloc.isra.0+0x2b/0x40 [znvpair]
[138854.097556]  ? nvlist_xalloc.part.0+0x68/0xd0 [znvpair]
[138854.097561]  zfsdev_ioctl_common+0x440/0xab0 [zfs]
[138854.097642]  zfsdev_ioctl+0x53/0xe0 [zfs]
[138854.097721]  __x64_sys_ioctl+0x8a/0xc0
[138854.097723]  do_syscall_64+0x3b/0x90
[138854.097726]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
[138854.097729] RIP: 0033:0x7f2d526dce37
[138854.097731] RSP: 002b:00007fff9fd54b18 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[138854.097732] RAX: ffffffffffffffda RBX: 00007fff9fd58168 RCX: 00007f2d526dce37
[138854.097733] RDX: 00007fff9fd54b40 RSI: 0000000000005a46 RDI: 0000000000000005
[138854.097734] RBP: 00007fff9fd58120 R08: 0000000000000003 R09: 0000000000000000
[138854.097735] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000005a46
[138854.097736] R13: 00007fff9fd54b40 R14: 0000000000005a46 R15: 0000000001eb4510
[138854.097739]  </TASK>
clefru commented 1 year ago

I am hitting that bug as well on a backup server that receives an encrypted raw snapshot.

uname -a: Linux irulan 5.15.83 #1-NixOS SMP Wed Dec 14 10:37:31 UTC 2022 x86_64 GNU/Linux
ZFS version: ZFS: Loaded module v2.1.7-1, ZFS pool version 5000, ZFS filesystem version 5
Dec 22 00:16:28 irulan kernel: VERIFY3(size != 0) failed (0 != 0)
Dec 22 00:16:28 irulan kernel: PANIC at range_tree.c:438:range_tree_remove_impl()
Dec 22 00:16:28 irulan kernel: Showing stack for process 1179
Dec 22 00:16:29 irulan kernel: CPU: 1 PID: 1179 Comm: z_wr_iss Tainted: P           O      5.15.62 #1-NixOS
Dec 22 00:16:29 irulan kernel: Hardware name:  /DQ87PG, BIOS PGQ8710H.86A.0155.2018.1031.1654 10/31/2018
Dec 22 00:16:29 irulan kernel: Call Trace:
Dec 22 00:16:29 irulan kernel:  <TASK>
Dec 22 00:16:29 irulan kernel:  dump_stack_lvl+0x46/0x5e
Dec 22 00:16:29 irulan kernel:  spl_panic+0xd1/0xe9 [spl]
Dec 22 00:16:29 irulan kernel:  ? fletcher_4_avx2_native+0x18/0x80 [zcommon]
Dec 22 00:16:29 irulan kernel:  ? abd_fletcher_4_iter+0x67/0xc0 [zcommon]
Dec 22 00:16:29 irulan kernel:  ? abd_iterate_func.part.0+0xdf/0x1c0 [zfs]
Dec 22 00:16:29 irulan kernel:  ? fletcher_4_incremental_native+0x160/0x160 [zcommon]
Dec 22 00:16:29 irulan kernel:  ? pn_free+0x30/0x30 [zfs]
Dec 22 00:16:29 irulan kernel:  ? zfs_btree_find_parent_idx+0x72/0xd0 [zfs]
Dec 22 00:16:29 irulan kernel:  ? metaslab_rangesize64_compare+0x40/0x40 [zfs]
Dec 22 00:16:29 irulan kernel:  ? zfs_btree_find+0x175/0x300 [zfs]
Dec 22 00:16:29 irulan kernel:  range_tree_remove_impl+0xa97/0xef0 [zfs]
Dec 22 00:16:29 irulan kernel:  ? metaslab_df_alloc+0xc4/0x5c0 [zfs]
Dec 22 00:16:29 irulan kernel:  ? __cond_resched+0x16/0x50
Dec 22 00:16:29 irulan kernel:  ? recalibrate_cpu_khz+0x10/0x10
Dec 22 00:16:29 irulan kernel:  ? multilist_sublist_insert_tail+0x2b/0x50 [zfs]
Dec 22 00:16:29 irulan kernel:  metaslab_alloc_dva+0xb6e/0x13f0 [zfs]
Dec 22 00:16:29 irulan kernel:  ? __cond_resched+0x16/0x50
Dec 22 00:16:29 irulan kernel:  metaslab_alloc+0xd3/0x280 [zfs]
Dec 22 00:16:29 irulan kernel:  zio_dva_allocate+0xd3/0x900 [zfs]
Dec 22 00:16:29 irulan kernel:  ? finish_task_switch.isra.0+0x7f/0x290
Dec 22 00:16:29 irulan kernel:  ? __schedule+0x2e9/0x1350
Dec 22 00:16:29 irulan kernel:  ? zio_encrypt+0x4eb/0x710 [zfs]
Dec 22 00:16:29 irulan kernel:  zio_execute+0x83/0x120 [zfs]
Dec 22 00:16:29 irulan kernel:  taskq_thread+0x2cf/0x500 [spl]
Dec 22 00:16:29 irulan kernel:  ? wake_up_q+0x90/0x90
Dec 22 00:16:29 irulan kernel:  ? zio_gang_tree_free+0x70/0x70 [zfs]
Dec 22 00:16:29 irulan kernel:  ? taskq_thread_spawn+0x60/0x60 [spl]
Dec 22 00:16:29 irulan kernel:  kthread+0x127/0x150
Dec 22 00:16:29 irulan kernel:  ? set_kthread_struct+0x50/0x50
Dec 22 00:16:29 irulan kernel:  ret_from_fork+0x22/0x30
Dec 22 00:16:29 irulan kernel:  </TASK>
Majiir commented 1 year ago

I hit this issue on an ARM NAS while it is receiving an incremental snapshot (sent with -I) of an encrypted dataset.

$ uname -srvmo
Linux 6.0.15 #1-NixOS SMP Wed Dec 21 16:41:16 UTC 2022 armv7l GNU/Linux
$ zfs --version
zfs-2.1.7-1
zfs-kmod-2.1.7-1
Jan 18 19:28:43  kernel: VERIFY3(size != 0) failed (0 != 0)
Jan 18 19:28:43  kernel: PANIC at range_tree.c:435:range_tree_remove_impl()
Jan 18 19:28:43  kernel: Showing stack for process 398
Jan 18 19:28:43  kernel: CPU: 0 PID: 398 Comm: z_wr_iss Tainted: P           O       6.0.15 #1-NixOS
Jan 18 19:28:43  kernel: Hardware name: Marvell Armada 370/XP (Device Tree)
Jan 18 19:28:43  kernel:  unwind_backtrace from show_stack+0x18/0x1c
Jan 18 19:28:43  kernel:  show_stack from dump_stack_lvl+0x40/0x4c
Jan 18 19:28:43  kernel:  dump_stack_lvl from spl_panic+0xa8/0xbc [spl]
Jan 18 19:28:43  kernel:  spl_panic [spl] from range_tree_remove_impl+0xe1c/0x187c [zfs]
Jan 18 19:28:43  kernel:  range_tree_remove_impl [zfs] from range_tree_remove+0x24/0x2c [zfs]
Jan 18 19:28:43  kernel:  range_tree_remove [zfs] from metaslab_alloc_dva+0x6b4/0x1640 [zfs]
Jan 18 19:28:43  kernel:  metaslab_alloc_dva [zfs] from metaslab_alloc+0xd0/0x2ac [zfs]
Jan 18 19:28:43  kernel:  metaslab_alloc [zfs] from zio_dva_allocate+0xc4/0x8ec [zfs]
Jan 18 19:28:43  kernel:  zio_dva_allocate [zfs] from zio_execute+0x98/0x12c [zfs]
Jan 18 19:28:43  kernel:  zio_execute [zfs] from taskq_thread+0x2b0/0x4b8 [spl]
Jan 18 19:28:43  kernel:  taskq_thread [spl] from kthread+0xd8/0xf4
Jan 18 19:28:43  kernel:  kthread from ret_from_fork+0x14/0x2c
SpComb commented 1 year ago

Is the fix for this issue shipped in any released zfs 2.1.* version? I am unable to find any reference to this issue or the fixing PR in the changelogs :confused:
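One generic way to check, once the fixing commit is known, is to ask git which release tags contain it (the commit hash below is a placeholder, since the fixing PR is not referenced in this thread):

```sh
# List the release tags that already contain a given commit.
git clone https://github.com/openzfs/zfs.git && cd zfs
git tag --contains <commit-hash> | grep '^zfs-2\.1\.'
```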