Open · speed47 opened 2 years ago
Issue still present in zfs-2.1.12
System information
Describe the problem you're observing
When `zfs_no_scrub_io=1`, the attached reproducer script will corrupt a pool with a few `zpool add`/`remove`/`attach` commands. Since `zfs_no_scrub_io` is supposed to only affect `zpool scrub`, this behavior seems unwanted and might be a side effect of this tunable. The problem is not reproducible when `zfs_no_scrub_io=0`.

Tested with OpenZFS 2.0.0 and 2.1.99 (latest commit).
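For reference, this is how the tunable was toggled when testing (a minimal sketch, assuming a Linux system with the `zfs` kernel module loaded; the sysfs path is the standard module-parameter interface):

```shell
# Check the current value of the tunable (0 is the default).
cat /sys/module/zfs/parameters/zfs_no_scrub_io

# Enable it before running the reproducer (requires root).
echo 1 > /sys/module/zfs/parameters/zfs_no_scrub_io

# Restore the default afterwards.
echo 0 > /sys/module/zfs/parameters/zfs_no_scrub_io
```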
Describe how to reproduce the problem
reproducer.sh

```
#! /bin/bash
set -e

dir=/dev/shm
poolname=test948894687234
v12ga=$dir/12ga
v1ga=$dir/1ga
v1gb=$dir/1gb
v1gc=$dir/1gc
v1gf=$dir/1gf

truncate -s 1024M $v12ga
truncate -s 64M $v1ga $v1gb $v1gc $v1gf

# Write files to the pool for ~1 second to generate some data.
data() {
    echo -n "writing data... "
    set +e
    ( while :; do echo "/$poolname/$RANDOM$RANDOM$RANDOM"; done ) | timeout 1 xargs touch
    set -e
    echo "done"
}

# Run a zpool subcommand against the pool and wait for it to settle.
action() {
    local cmd=$1
    shift
    zpool "$cmd" "$poolname" "$@"
    echo "zpool $cmd $@"
    zpool wait $poolname
}

try() {
    zpool create -f -o failmode=continue $poolname $v12ga
    action add special $v1ga $v1gb
    data
    action add special $v1gc
    data
    action remove $v1gb
    data
    action remove $v1gc
    data
    action attach $v1ga $v1gf
    data
    action scrub
    if zpool status $poolname | grep -q 'unrecoverable error'; then
        echo "problem reproduced!"
        zpool events -v $poolname | grep cksum
        zpool status $poolname
        exit 0
    else
        echo "didn't reproduce the problem, trying again..."
        umount /$poolname
        zpool destroy $poolname
    fi
}

umount /$poolname || true
zpool destroy $poolname || true

while :; do try; done
```

Include any warning/errors/backtraces from the system logs