bdrewery / zfstools

Various ZFS scripts, most notably zfs-auto-snapshot, a Ruby clone of the OpenSolaris auto-snapshotting service.

Snapshots are created for non-mounted filesystems, but not deleted #26

Closed caleb closed 9 years ago

caleb commented 9 years ago

I have a filesystem in my pool that is not mounted. zfstools keeps creating snapshots for that filesystem, but never cleans up any of its old snapshots, because the cleanup pass skips filesystems that are not mounted.

I ended up with 12000 snapshots :)

I don't know whether you'd rather not create snapshots for unmounted filesystems at all, or delete expired snapshots for them.
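The asymmetry being reported can be sketched like this (a hypothetical model, not zfstools' actual code — `Dataset` and the two helpers are illustrative): creation covers every dataset, but cleanup skips unmounted ones, so snapshots on an unmounted dataset accumulate without bound.

```ruby
# Hypothetical sketch of the reported behavior -- not zfstools' actual code.
Dataset = Struct.new(:name, :mounted, :snapshots)

def create_snapshots(datasets, label)
  datasets.each { |ds| ds.snapshots << label }   # snapshots every dataset
end

def destroy_expired(datasets, keep:)
  datasets.each do |ds|
    next unless ds.mounted                       # the bug: unmounted skipped
    ds.snapshots.shift while ds.snapshots.size > keep
  end
end

datasets = [Dataset.new("tank/home", true, []),
            Dataset.new("tank/old-be", false, [])]

100.times do |i|
  create_snapshots(datasets, "hourly-#{i}")
  destroy_expired(datasets, keep: 24)
end

datasets.each { |ds| puts "#{ds.name}: #{ds.snapshots.size} snapshots" }
# tank/home stays at 24; tank/old-be grows to 100 and keeps growing
```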

bdrewery commented 9 years ago

I'm inclined to not snapshot unmounted datasets. I don't want to cause surprise data loss though. What is your use case for having a dataset not mounted?

caleb commented 9 years ago

I can't even remember what was going on when it happened :) I think it had something to do with moving my system over to a boot-environment-based layout with beadm; I had old filesystems I no longer used that I kept as backups and forgot about.

At any rate, I think you're right, not creating snapshots for unmounted volumes seems like the way to go.

bdrewery commented 9 years ago

Yeah, just yesterday I realized I had this case with BEs as well, but I don't see snapshots being created on them. I do think it makes sense to just skip unmounted datasets, though.

ghost commented 9 years ago

Regarding skipping unmounted file systems, I've been intending to use "zfs send" piped into "zfs recv" as one of my backup methods, but I've noticed recently that trying to do a "zfs send" from a snapshot on a pool that contains unmounted file systems to another pool can cause issues with mount points.

My system is still on 9.3, but I set up the zroot in a similar way to how 10.1 does it. Basically, FreeBSD 10.1 sets up its zroot such that zroot/usr and zroot/var have "canmount=off"; they seem to exist solely to give the child filesystems a mount point (among other properties) to inherit. Therefore, a snapshot of zroot created with zfs-auto-snapshot won't contain a snapshot for zroot/usr or zroot/var, with the result that "zfs recv -d" ends up creating new "usr" and "var" datasets on the destination pool that inherit their mount point from the destination pool. All subsequent child filesystems under /usr and /var end up with the wrong mount point.
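The inheritance problem described above can be modeled roughly as follows. This is a simplified sketch of mountpoint resolution, not real ZFS behavior, and the pool names are illustrative: a dataset's effective mountpoint is its nearest ancestor's explicit mountpoint plus the remaining path components, so when `zfs recv -d` recreates an intermediate dataset without the source's explicit mountpoint, everything below it resolves under the destination pool's root instead.

```ruby
# Simplified model (not real ZFS) of mountpoint inheritance after a
# partial "zfs send | zfs recv -d". Names and layout are illustrative.

# Walk up the dataset path to the nearest ancestor with an explicit
# mountpoint, then append the remaining components.
def mountpoint(name, explicit)
  parts = name.split("/")
  parts.size.downto(1) do |n|
    prefix = parts.first(n).join("/")
    if explicit.key?(prefix)
      return ([explicit[prefix]] + parts.drop(n)).join("/").squeeze("/")
    end
  end
end

# Source pool: zroot/usr exists (canmount=off) just to hand /usr down.
src = { "zroot" => "/", "zroot/usr" => "/usr" }
puts mountpoint("zroot/usr/local", src)        # => /usr/local

# Destination after recv -d: the stream carried no zroot/usr snapshot,
# so backup/usr is created fresh and inherits from the pool root.
dst = { "backup" => "/backup" }
puts mountpoint("backup/usr/local", dst)       # => /backup/usr/local
```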

This problem would likely also exist for other data pools used by ports that don't mount some of their ZFS filesystems (poudriere, for example).

bdrewery commented 9 years ago

This was supposed to be addressed in 3628f79db4fbf9bb79d8bdfbaade7544ba08b22e, but it must not have worked, since these reports came in long after the change.

bdrewery commented 9 years ago

I think what may have happened here is that a recursive snapshot affected an unmounted dataset. The change in 3628f79 needs to consider unmounted children and not use recursive snapshotting in that case.
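The fix being discussed could be sketched like this — a hypothetical outline, not the actual zfstools code, with made-up names: before issuing a recursive snapshot, check the dataset's descendants; if any is unmounted, fall back to snapshotting each mounted dataset individually so the unmounted ones are never touched.

```ruby
# Hypothetical sketch of the proposed fix -- not the actual zfstools code.
Dataset = Struct.new(:name, :mounted)

# Return the zfs commands to run for one snapshot pass over `root`.
def snapshot_commands(root, datasets, label)
  family = datasets.select do |ds|
    ds.name == root || ds.name.start_with?("#{root}/")
  end
  if family.all?(&:mounted)
    ["zfs snapshot -r #{root}@#{label}"]       # safe: everything is mounted
  else
    # An unmounted child exists: avoid -r, snapshot mounted datasets only.
    family.select(&:mounted).map { |ds| "zfs snapshot #{ds.name}@#{label}" }
  end
end

datasets = [Dataset.new("zroot", true),
            Dataset.new("zroot/usr", false),    # canmount=off placeholder
            Dataset.new("zroot/usr/local", true)]

puts snapshot_commands("zroot", datasets, "hourly")
# zfs snapshot zroot@hourly
# zfs snapshot zroot/usr/local@hourly
```

The trade-off is losing the atomicity of a single recursive snapshot, which is presumably why recursion is kept for the all-mounted case.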