openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

Occasional panics with "blkptr at XXX has invalid YYY" #16626

Open asomers opened 1 month ago

asomers commented 1 month ago

System information

Type Version/Name
Distribution Name FreeBSD
Distribution Version 14.1 and 14.0
Kernel Version 14.1 and 14.0
Architecture amd64
OpenZFS Version zfs-kmod-2.2.4-FreeBSD_g256659204 and zfs-kmod-2.2.0-FreeBSD_g95785196f

Describe the problem you're observing

Our servers occasionally crash due to what looks like in-memory corruption of the block pointer. The crashes are not repeatable. That is, the same server never crashes in exactly the same way twice. So we don't think that on-disk corruption is to blame. But they happen often enough that we can't dismiss them as one-off events. We see these crashes approximately once for every 10 PiB of ZFS writes. ZFS encryption is NOT in use on any of the affected servers and they all have ECC RAM.

We would be very grateful to anybody with any insights into this problem.

Describe how to reproduce the problem

Unfortunately, we don't have any reproduction case. There is no known trigger. The crashing servers have a complicated workload that involves writes coming from userspace processes, from zfs recv, and a little bit from ctld.

Include any warning/errors/backtraces from the system logs

A selection of panic messages:

Solaris(panic): <POOL> blkptr at 0xfffffe15d9de2c00x has invalid TYPE 127
Solaris(panic): blkptr at 0xfffffe0f58099280 has invalid BLK_BIRTH 0
Solaris(panic): <POOL>: blkptr at 0xfffffe144f5f1960x has invalid TYPE 209
Solaris(panic): <POOL>: blkptr at 0xfffffe159796f080x has invalid COMPRESS 72
Solaris(panic): <POOL>: blkptr at 0xfffffe1d1048a480x has invalid TYPE 211
Solaris(panic): <POOL>: blkptr at 0xfffffe1463003700x has invalid COMPRESS 115
Solaris(panic): <POOL>: blkptr at 0xfffffe16a19d6980x has invalid TYPE 66
Solaris(panic): <POOL>: blkptr at 0xfffffe1895d19480x has invalid TYPE 212
Solaris(panic): <POOL>: blkptr at 0xfffffe21b502f4a0x DVA 0 has invalid VDEV 8323199
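
Each of these messages comes out of the block-pointer sanity checks in zfs_blkptr_verify(), which (per the stack trace below) reports failures through zfs_blkptr_verify_log() and zfs_panic_recover() when a field of an in-memory blkptr_t is outside its valid range. The following is only a simplified, self-contained sketch of that style of range checking; the struct layout, field names, and limits are stand-ins for the real OpenZFS macros (BP_GET_TYPE, BP_GET_COMPRESS, DVA_GET_VDEV, and friends), not the actual code.

/*
 * Simplified illustration of the kind of checks that produce
 * "blkptr at ... has invalid ..." panics.  All definitions here are
 * stand-ins, not the real OpenZFS ones.
 */
#include <stdio.h>
#include <stdint.h>

#define FAKE_NUM_OBJECT_TYPES   54      /* stand-in for DMU_OT_NUMTYPES */
#define FAKE_NUM_COMPRESS_ALGOS 16      /* stand-in for the compress enum */

struct fake_blkptr {
        uint8_t  type;          /* object type of the block */
        uint8_t  compress;      /* compression algorithm */
        uint64_t birth_txg;     /* txg in which the block was born */
        uint64_t dva_vdev[3];   /* vdev id referenced by each DVA */
};

/* Return 0 if every field is in range; print the offending field otherwise. */
static int
fake_blkptr_verify(const struct fake_blkptr *bp, uint64_t vdev_count)
{
        if (bp->type >= FAKE_NUM_OBJECT_TYPES) {
                printf("blkptr at %p has invalid TYPE %u\n",
                    (const void *)bp, (unsigned)bp->type);
                return (-1);
        }
        if (bp->compress >= FAKE_NUM_COMPRESS_ALGOS) {
                printf("blkptr at %p has invalid COMPRESS %u\n",
                    (const void *)bp, (unsigned)bp->compress);
                return (-1);
        }
        if (bp->birth_txg == 0) {
                printf("blkptr at %p has invalid BLK_BIRTH 0\n",
                    (const void *)bp);
                return (-1);
        }
        for (int d = 0; d < 3; d++) {
                if (bp->dva_vdev[d] >= vdev_count) {
                        printf("blkptr at %p DVA %d has invalid VDEV %ju\n",
                            (const void *)bp, d, (uintmax_t)bp->dva_vdev[d]);
                        return (-1);
                }
        }
        return (0);
}

int
main(void)
{
        /* A blkptr whose TYPE byte was scribbled on, as in the reports above. */
        struct fake_blkptr bad = { .type = 212, .compress = 3,
            .birth_txg = 123456, .dva_vdev = { 0, 1, 2 } };

        return (fake_blkptr_verify(&bad, 4) == 0 ? 0 : 1);
}

A block pointer that passed these checks when it was written but fails them later, with wildly out-of-range values like TYPE 212 or VDEV 8323199, is consistent with the in-memory corruption described above rather than with bad data on disk.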

And a representative stack trace:

[5269025] panic: Solaris(panic): <POOL>: blkptr at 0xfffffe21b502f4a0x DVA 0 has invalid VDEV 8323199
[5269025] cpuid = 11
[5269025] time = 1727012534
[5269025] KDB: stack backtrace:
[5269025] db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0ddfed7780
[5269025] vpanic() at vpanic+0x132/frame 0xfffffe0ddfed78b0
[5269025] panic() at panic+0x43/frame 0xfffffe0ddfed7910
[5269025] vcmn_err() at vcmn_err+0xeb/frame 0xfffffe0ddfed7a40
[5269025] zfs_panic_recover() at zfs_panic_recover+0x59/frame 0xfffffe0ddfed7aa0
[5269025] zfs_blkptr_verify_log() at zfs_blkptr_verify_log+0x130/frame 0xfffffe0ddfed7c30
[5269025] zfs_blkptr_verify() at zfs_blkptr_verify+0x261/frame 0xfffffe0ddfed7c90
[5269025] zio_free() at zio_free+0x25/frame 0xfffffe0ddfed7cc0
[5269025] dsl_dataset_block_kill() at dsl_dataset_block_kill+0x19d/frame 0xfffffe0ddfed7d20
[5269025] dbuf_write_done() at dbuf_write_done+0x4f/frame 0xfffffe0ddfed7d60
[5269025] arc_write_done() at arc_write_done+0x378/frame 0xfffffe0ddfed7da0
[5269025] zio_done() at zio_done+0xc6e/frame 0xfffffe0ddfed7e10
[5269025] zio_execute() at zio_execute+0x3c/frame 0xfffffe0ddfed7e40
[5269025] taskqueue_run_locked() at taskqueue_run_locked+0x182/frame 0xfffffe0ddfed7ec0
[5269025] taskqueue_thread_loop() at taskqueue_thread_loop+0xc2/frame 0xfffffe0ddfed7ef0
[5269025] fork_exit() at fork_exit+0x7f/frame 0xfffffe0ddfed7f30
[5269025] fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0ddfed7f30
[5269025] --- trap 0xc, rip = 0x826b7fd1a, rsp = 0x857ce3a88, rbp = 0x857ce3aa0 ---
[5269025] KDB: enter: panic
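
Note that zfs_panic_recover(), which appears in this trace, only panics when the zfs_recover tunable is off; with it on, the same condition is logged as a warning and execution continues (which can let corrupted state propagate further, so it is mainly useful for gathering more data). On FreeBSD that tunable should be exposed as the vfs.zfs.recover sysctl. A minimal sketch of inspecting and flipping it programmatically, assuming that sysctl name:

/*
 * Sketch: read and optionally set vfs.zfs.recover via sysctl(3) on FreeBSD.
 * The sysctl name is an assumption (the zfs_recover module parameter);
 * setting it requires root and downgrades zfs_panic_recover() panics to
 * warnings -- debugging aid only.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
        int cur = 0, val;
        size_t len = sizeof(cur);

        if (sysctlbyname("vfs.zfs.recover", &cur, &len, NULL, 0) != 0) {
                perror("sysctlbyname(vfs.zfs.recover)");
                return (1);
        }
        printf("vfs.zfs.recover is currently %d\n", cur);

        if (argc > 1) {         /* e.g. "./zrecover 1" to enable */
                val = atoi(argv[1]);
                if (sysctlbyname("vfs.zfs.recover", NULL, NULL,
                    &val, sizeof(val)) != 0) {
                        perror("set vfs.zfs.recover");
                        return (1);
                }
                printf("vfs.zfs.recover set to %d\n", val);
        }
        return (0);
}

The equivalent one-liner would simply be sysctl vfs.zfs.recover=1, again assuming that sysctl name.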
snajpa commented 3 days ago

I saw a very similar stack on Linux with 6.10 when I tried to docker pull several times more image data than could fit in RAM with the zfs docker driver. It went away after upgrading to 6.11, but it was exactly the same path through arc_write_done. It looked like some kind of race with dbuf_evict - https://github.com/vpsfreecz/zfs/pull/1#issuecomment-2458598745

snajpa commented 3 days ago

Looks like we'll need to hunt this one down :D It might be responsible for some of the unexplained crashes on nodes where it's impractical to take a full memory dump.