openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

"zpool import" is altering a readonly device #7397

Open · patatetom opened 6 years ago

patatetom commented 6 years ago

System information

Distribution Name | Archlinux
Linux Kernel      | 4.14.32-1-lts
Architecture      | 64
ZFS Version       | 0.7.0-403_g1724eb62d
SPL Version       | 0.7.0-31_g43983eb

Describe the problem you're observing

FreeBSD boot on /dev/sdb1
FreeBSD swap on /dev/sdb2
FreeBSD ZFS on /dev/sdb3
(e.g. pfSense on /dev/sdb)

# blockdev --setro /dev/sdb*
# chmod 444 /dev/sdb*
# dd if=/dev/zero of=/dev/sdb3 status=none
dd: writing to '/dev/sdb3': Operation not permitted
# sha1sum /dev/sdb3
b0f8d6a60ecadac714aef18414a5ebf4eda0f812  /dev/sdb3
# zpool import -R /test/ -N zroot
# sha1sum /dev/sdb3
e54f4b827af5127d29a298495e152b8d25407b48  /dev/sdb3

(this does not happen if the option -o readonly=on is passed)

this is problematic because it bypasses the rules I've set on my drive and its partitions (think forensics)...

Describe how to reproduce the problem

Include any warning/errors/backtraces from the system logs

patatetom commented 6 years ago

note that, in the man page, the -N option is only documented for use together with the -a option of the import command...

ahrens commented 6 years ago

I agree that a (non-readonly) import should fail if the device files are not writable.

patatetom commented 6 years ago

the readonly option could be made implicit (added automatically) at import time whenever the device is read-only...
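
For illustration, here is a minimal userland sketch (not part of zpool itself) of how a device could be probed for that decision, using the BLKROGET ioctl that is mentioned later in this thread and that sits behind blockdev --getro. The helper name device_is_readonly() is hypothetical.

/*
 * Illustrative userland sketch: report whether a block device is marked
 * read-only by the kernel (BLKROGET is the ioctl behind blockdev --getro).
 * An import helper could use a check like this to add readonly=on
 * implicitly, as suggested above.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* Return 1 if the device is read-only, 0 if writable, -1 on error. */
static int
device_is_readonly(const char *path)
{
    int fd, ro = 0;

    fd = open(path, O_RDONLY);
    if (fd < 0)
        return (-1);

    if (ioctl(fd, BLKROGET, &ro) < 0) {
        close(fd);
        return (-1);
    }

    close(fd);
    return (ro != 0);
}

int
main(int argc, char **argv)
{
    int ro;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
        return (2);
    }

    ro = device_is_readonly(argv[1]);
    if (ro < 0) {
        perror(argv[1]);
        return (1);
    }

    printf("%s is %s\n", argv[1], ro ? "read-only" : "writable");
    return (0);
}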

loli10K commented 5 years ago

Seems to be fixed in latest release:

root@linux:~# uname -r
4.9.0-3-amd64
root@linux:~# zfs version
zfs-0.8.2-1
zfs-kmod-0.8.2-1
root@linux:~# blockdev --getro /dev/sda*
0
0
0
root@linux:~# zpool create -f testpool /dev/sda
root@linux:~# zpool export testpool
root@linux:~# blockdev --setro /dev/sda*
root@linux:~# blockdev --getro /dev/sda*
1
1
1
root@linux:~# zpool import testpool
cannot import 'testpool': permission denied
    Destroy and re-create the pool from
    a backup source.
root@linux:~# blockdev --setrw /dev/sda*
root@linux:~# zpool import testpool
root@linux:~# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool   960M   130K   960M        -         -     0%     0%  1.00x    ONLINE  -
root@linux:~#

behlendorf commented 5 years ago

@loli10K thanks for verifying this was fixed. Though, it looks like this is another case where we print a somewhat misleading error message.

loli10K commented 5 years ago

The EACCES is coming from vdev_disk_open() -> blkdev_get_by_path():

(gdb) bt
#0  blkdev_get_by_path (path=0xffff8800baa8d8e0 "/dev/sda1", mode=131, holder=0x2401de7) at fs/block_dev.c:1423
#1  0xffffffffc02e7400 in vdev_disk_open (v=0xffff8800369cc000, psize=0xffff8800b8c37d10, max_psize=0xffff8800b8c37d18, ashift=0xffff8800b8c37d20) at /usr/src/zfs/module/zfs/vdev_disk.c:315
#2  0xffffffffc02e4434 in vdev_open (vd=0xffff8800369cc000) at /usr/src/zfs/module/zfs/vdev.c:1675
#3  0xffffffffc02e4d12 in vdev_open_child (arg=0xffff8800369cc000) at /usr/src/zfs/module/zfs/vdev.c:1565
#4  0xffffffffc00dbcf7 in taskq_thread (args=0xffff8800b80110c0) at /usr/src/zfs/module/spl/spl-taskq.c:927
#5  0xffffffff8109d03a in kthread (_create=0xffff8800ba9f7100) at kernel/kthread.c:211
#6  0xffffffff81809d0f in ret_from_fork () at arch/x86/entry/entry_64.S:468
#7  0x0000000000000000 in ?? ()
(gdb) fin
Run till exit from #0  blkdev_get_by_path (path=0xffff8800baa8d8e0 "/dev/sda1", mode=131, holder=0x2401de7) at fs/block_dev.c:1423
0xffffffffc02e7400 in vdev_disk_open (v=0xffff8800369cc000, psize=0xffff8800b8c37d10, max_psize=0xffff8800b8c37d18, ashift=0xffff8800b8c37d20) at /usr/src/zfs/module/zfs/vdev_disk.c:315
315         bdev = vdev_bdev_open(v->vdev_path, mode, zfs_vdev_holder);
Value returned is $5 = (struct block_device *) 0xfffffffffffffff3
(gdb) fin
Run till exit from #0  0xffffffffc02e7400 in vdev_disk_open (v=0xffff8800369cc000, psize=0xffff8800b8c37d10, max_psize=0xffff8800b8c37d18, ashift=0xffff8800b8c37d20) at /usr/src/zfs/module/zfs/vdev_disk.c:315
vdev_open (vd=0xffff8800369cc000) at /usr/src/zfs/module/zfs/vdev.c:1682
1682        if (osize > max_osize) {
Value returned is $6 = 13
(gdb)  

Handling this from userland (ioctl(BLKROGET)?) would require knowing which disk(s) caused the EACCES. Maybe we could call blkdev_get_by_path(FMODE_READ) && bdev_read_only() from vdev_disk_open() and return EROFS accordingly.
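
A rough sketch of that second idea, for illustration only: vdev_bdev_open() is the wrapper visible in the backtrace above, while vdev_bdev_close() and the helper name vdev_disk_probe_rofs() are assumptions, and a real change would also need to skip the check for read-only imports (spa_mode() without FWRITE).

/*
 * Illustrative only: probe the underlying block device read-only and
 * return EROFS when a writable open is requested against a read-only
 * device, so "zpool import" can print a meaningful error.
 */
static int
vdev_disk_probe_rofs(vdev_t *v)
{
    struct block_device *bdev;

    /* Open read-only just to query the device state. */
    bdev = vdev_bdev_open(v->vdev_path, FMODE_READ, zfs_vdev_holder);
    if (IS_ERR(bdev))
        return (SET_ERROR(-PTR_ERR(bdev)));

    if (bdev_read_only(bdev)) {
        vdev_bdev_close(bdev, FMODE_READ);
        return (SET_ERROR(EROFS));
    }

    vdev_bdev_close(bdev, FMODE_READ);
    return (0);
}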

behlendorf commented 5 years ago

Maybe we could blkdev_get_by_path(FMODE_READ) && bdev_read_only() from vdev_disk_open() and return EROFS accordingly.

That's what I was going to suggest, since it's really only in vdev_disk_open() where we have enough information to accurately report EROFS. We'd want to update vdev_file_open() as well.
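
For vdev_file_open() the equivalent change could be as simple as translating a permission-style failure of a writable open into EROFS. The sketch below is purely illustrative: the helper name is hypothetical and the real code opens the backing file through the platform's vnode interface.

/*
 * Illustrative only: if write access was requested and the open of the
 * backing file failed with a permission error, report EROFS so the error
 * matches what vdev_disk_open() would return for a read-only disk.
 */
static int
vdev_file_map_open_error(vdev_t *vd, int error)
{
    if ((spa_mode(vd->vdev_spa) & FWRITE) && error == EACCES)
        return (SET_ERROR(EROFS));

    return (error);
}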

behlendorf commented 3 years ago

The change suggested here still needs to be made.

stale[bot] commented 2 years ago

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.