Open anaximeno opened 2 years ago
It would be nice to add ZFS support.
This looks like a duplicate of #25.
There was a similar issue that was once opened on the original repo (not this fork):
https://github.com/teejee2008/timeshift/issues/529
...that was closed by teejee (Timeshift's creator) with this message: "ZFS has a weird disk layout that is not supported."
Maximum respect for Tony George; we have all been using his creation for years straight. Let's just take our time and contribute as we can.
From the little I know about ZFS, it really differs from the filesystems that Timeshift has been supporting very well so far.
ZFS is a rock-solid, stable, 20+ year old project from Sun/Oracle, but as an option on the desktop Linux distros where Timeshift is popular it is still very recent, so it will probably take some time before Timeshift supports that new scenario.
I really hope that Timeshift, be it teejee's original or this Mint fork, can support ZFS somehow.
-1 to this unless ZFS support gets included in the mainline kernel.
@leigh123linux The Ubuntu kernel already supports ZFS, and the installer can also install Linux Mint or Ubuntu onto ZFS. Why does the mainline kernel matter here?
The Ubuntu kernel already supports ZFS, and the installer can also install Linux Mint or Ubuntu onto ZFS. Why does the mainline kernel matter here?
I second that. I just installed Mint 21 onto ZFS, and it's not there:
Seems like @teejee2008 didn't like the notion of the first field in the corresponding /proc/mounts line in the aforementioned issue:
Not sure what it looks like for root-on-BTRFS.
The original repo has been archived since Oct 16, 2022, it seems. I guess this means that folks interested in this feature can contribute to this repository now.
I would like to see zfs support, too. I can contribute with testing.
I guess this would be a lot of work. I had a quick look at the source and saw that this runs a little against, or at least outside, the existing logic of guessing/detecting possible datastores (disks/volumes/filesystems) based on the serving device.
ZFS is an all-in-one replacement for classical tools like LVM (volume management), LUKS (encryption), and ext/btrfs (filesystems).
backup to a zfs (sub)volume
My case would be backing up a LUKS-encrypted root on LVM to a 3-disk ZFS raidz1 (RAID 5). In my ZFS pool, different volumes and sub-volumes are created for different purposes. Some of them use encryption; others don't.
Keeping this in mind, a user has to make sure the preferred backup volume is already set up and mounted. This applies to ZFS backup sources and destinations.
A Timeshift implementation has to look at the mountpoints of type zfs (grep zfs /proc/mounts) instead of, or in addition to, the device detection.
With this behavior an rsync backup can be realized.
backup/snapshot a zfs (sub)volume
For using Timeshift to back up a ZFS (sub)volume itself, as @sskras mentioned, another mechanism needs to be implemented.
ZFS has a snapshot function built in. The technical commands and options for ZFS snapshot management can be found in the zfs-autosnap source (https://github.com/rollcat/zfs-autosnap).
Some example output from my system:
# disks and partitions (sdd lvm & sd[efg] for zfs)
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdd 8:48 1 119,2G 0 disk
├─sdd1 8:49 1 234,3M 0 part
└─sdd2 8:50 1 119G 0 part
├─ssdVG-root 253:0 0 40G 0 lvm
│ └─root 253:1 0 40G 0 crypt /
├─ssdVG-boot 253:2 0 1G 0 lvm /boot
sde 8:64 0 7,3T 0 disk
└─sde1 8:65 0 7,2T 0 part
sdf 8:80 0 7,3T 0 disk
└─sdf1 8:81 0 7,2T 0 part
sdg 8:96 0 7,3T 0 disk
└─sdg1 8:97 0 7,2T 0 part
$ zpool status usbpool
pool: usbpool
state: ONLINE
scan: scrub repaired 0B in 14:36:51 with 0 errors on Sun Dec 11 15:00:53 2022
config:
NAME STATE READ WRITE CKSUM
usbpool ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
wwn-0x5000c500c6988b98-part1 ONLINE 0 0 0
wwn-0x5000c500d0142180-part1 ONLINE 0 0 0
wwn-0x5000c500cf99cd54-part1 ONLINE 0 0 0
errors: No known data errors
# list volumes of zfs pool "usbpool"
$ zfs list -r usbpool
NAME USED AVAIL REFER MOUNTPOINT
usbpool 7.11T 7.24T 155K /usbpool
usbpool/backup 7.01T 7.24T 341K /usbpool/backup
usbpool/backup/clonezilla 10.0G 7.24T 10.0G /usbpool/backup/clonezilla
usbpool/backup/proxmox 5.82T 7.24T 5.77T /usbpool/backup/proxmox
usbpool/backup/rsnapshot 1.18T 7.24T 1.14T /usbpool/backup/rsnapshot
# list snapshots of volume "backup" in zfs pool "usbpool"
$ zfs list -t snapshot usbpool/backup | head
NAME USED AVAIL REFER MOUNTPOINT
usbpool/backup@zfs-auto-snap_monthly-2022-07-12-0643 0B - 330K -
usbpool/backup@zfs-auto-snap_monthly-2022-08-12-0557 0B - 330K -
usbpool/backup@zfs-auto-snap_monthly-2022-09-11-0629 0B - 330K -
usbpool/backup@zfs-auto-snap_monthly-2022-10-11-0610 0B - 330K -
usbpool/backup@zfs-auto-snap_weekly-2022-10-21-0554 0B - 330K -
usbpool/backup@autosnap_2022-10-27_11:00:04_monthly 0B - 330K -
usbpool/backup@autosnap_2022-10-27_11:00:04_daily 0B - 330K -
usbpool/backup@autosnap_2022-10-28_00:00:04_daily 0B - 330K -
usbpool/backup@zfs-auto-snap_weekly-2022-10-28-0554 0B - 330K -
# list snapshots of sub-volume "backup/rsnapshot" in zfs pool "usbpool"
$ zfs list -t snapshot usbpool/backup/rsnapshot | head
NAME USED AVAIL REFER MOUNTPOINT
usbpool/backup/rsnapshot@zfs-auto-snap_monthly-2022-07-12-0643 5.34G - 1.14T -
usbpool/backup/rsnapshot@zfs-auto-snap_monthly-2022-08-12-0557 4.91G - 1.14T -
usbpool/backup/rsnapshot@zfs-auto-snap_monthly-2022-09-11-0629 5.40G - 1.14T -
usbpool/backup/rsnapshot@zfs-auto-snap_monthly-2022-10-11-0610 2.90G - 1.14T -
usbpool/backup/rsnapshot@zfs-auto-snap_weekly-2022-10-21-0554 68.7M - 1.14T -
usbpool/backup/rsnapshot@autosnap_2022-10-27_11:00:03_monthly 0B - 1.14T -
usbpool/backup/rsnapshot@autosnap_2022-10-27_11:00:03_daily 0B - 1.14T -
usbpool/backup/rsnapshot@autosnap_2022-10-28_00:00:03_daily 0B - 1.14T -
usbpool/backup/rsnapshot@zfs-auto-snap_weekly-2022-10-28-0554 852K - 1.14T -
# listing/counting snapshots can be done recursively
$ zfs list -t snapshot usbpool/backup | wc -l
174
$ zfs list -r -t snapshot usbpool/backup | wc -l
822
$ zfs list -r -t snapshot usbpool | wc -l
1558
# mounts can be listed via /proc/mounts
$ grep zfs /proc/mounts
usbpool /usbpool zfs rw,xattr,noacl 0 0
usbpool/backup /usbpool/backup zfs rw,xattr,noacl 0 0
usbpool/backup/rsnapshot /usbpool/backup/rsnapshot zfs rw,xattr,posixacl 0 0
usbpool/backup/proxmox /usbpool/backup/proxmox zfs rw,xattr,noacl 0 0
usbpool/backup/clonezilla /usbpool/backup/clonezilla zfs rw,xattr,noacl 0 0
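If Timeshift ever consumes listings like the ones above programmatically, it would probably be safer to ask zfs for machine-readable output instead of scraping the human-oriented tables. A minimal sketch using standard zfs list flags (the pool name is just the one from the example above):
# -H: no header, tab-separated output; -o name: print only snapshot names
$ zfs list -H -r -t snapshot -o name usbpool/backup
usbpool/backup@zfs-auto-snap_monthly-2022-07-12-0643
usbpool/backup/rsnapshot@zfs-auto-snap_monthly-2022-07-12-0643
...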
@leigh123linux The Ubuntu kernel already supports ZFS, and the installer can also install Linux Mint or Ubuntu onto ZFS. Why does the mainline kernel matter here?
Because Ubuntu drops support for its stupid ideas.
I also wouldn't recommend installing Linux Mint with ZFS; BTRFS is a better option for a lot of reasons.
This YouTube video might be a good reference on the pros and cons of some of the most common Linux filesystems: https://youtu.be/G785-kxFH_M
@leigh123linux commented 3 days ago:
Because Ubuntu drops support for its stupid ideas.
Can we stick to facts and avoid being subjective? E.g.:
Because Ubuntu drops support for ZFS.
Thanks in advance :)
@anaximeno commented 3 days ago:
I also wouldn't recommend installing Linux Mint with ZFS; BTRFS is a better option for a lot of reasons.
These reasons don't seem to include the ability to mount a degraded RAID-1: https://www.reddit.com/r/btrfs/comments/vf2u17/comment/icuf4ao/
These reasons don't seem to include the ability to mount a degraded RAID-1: https://www.reddit.com/r/btrfs/comments/vf2u17/comment/icuf4ao/
It didn't include it, so thanks; now it does.
Let me add one thing I forgot to say in the comment about using BTRFS over ZFS.
BTRFS seems to be a better option for personal computers than ZFS, though this is arguable when talking about servers. The way ZFS uses RAM during its operations can be seen as good or bad depending on the conditions of your computer. From what I've noticed, it is not the best option on machines with lower RAM capacity. BTRFS is far from perfect (there are also problems with RAID-5 and RAID-6, if I'm remembering right), but it has good support in the mainline kernel and is becoming better and faster with each new release.
Well, this is my subjective opinion based on objective facts. I don't believe there's only one way of doing things, so people should use whatever they find is the best option for them.
It didn't include it, so thanks; now it does.
To me this means that some of your pros are cons for other people (in the context of using BTRFS). So it looks kind of subjective.
From what I've noticed, it is not the best option on machines with lower RAM capacity.
I haven't been running a machine with 512 MB of RAM + ZFS, but some folks have:
@vermaden tweeted on 1:06 AM · Apr 13, 2023:
I used 2 x 2TB ZFS mirror for YEARS with 512 MB of RAM on #FreeBSD and everything went smooth - my uptime was as long as I wanted to - usually ended by need for upgrade.
Just limit ARC size to 128 MB and nothing more needed.
My current 5TB of data backup boxes have 2 GB RAM.
So it's a matter of configuration effort and a performance trade-off, IMO.
It didn't include it, so thanks; now it does.
To me this means that some of your pros are cons for other people (in the context of using BTRFS). So it looks kind of subjective.
No. I do see that as a problem BTRFS has, and as such, I do include it when weighing between BTRFS and ZFS.
From what I've noticed, it is not the best option on machines with lower RAM capacity.
I haven't been running a machine with 512 MB of RAM + ZFS, but some folks have:
@vermaden tweeted on 1:06 AM · Apr 13, 2023:
I used 2 x 2TB ZFS mirror for YEARS with 512 MB of RAM on #FreeBSD and everything went smooth - my uptime was as long as I wanted to - usually ended by need for upgrade.
Just limit ARC size to 128 MB and nothing more needed.
My current 5TB of data backup boxes have 2 GB RAM.
So it's a matter of configuration effort and a performance trade-off, IMO.
That's nice; I didn't know you could configure it that way.
I still do believe that support for ZFS should be added to Timeshift.
Since I installed Ubuntu with ZFS on my work laptop and I want to be able to use Timeshift, I would appreciate the effort.
I would love to see ZFS support here - then a FreeBSD port would be lovely :)
Regards, vermaden
Also the values (on FreeBSD) to set minimum and maximum ARC size are:
# grep arc /etc/sysctl.conf
vfs.zfs.arc_max: 536870912
vfs.zfs.arc_min: 134217728
The 'dots' syntax is also OK:
# grep arc /etc/sysctl.conf
vfs.zfs.arc.max: 536870912
vfs.zfs.arc.min: 134217728
I believe something similar (or identical) is available on Linux.
The above settings are for a 128 MB minimum and a 512 MB maximum.
Usually the ARC is still at the maximum of that value; below is a stat from several days of uptime on my laptop, from the htop(1) tool.
ARC: 512M Used:241M MFU:75.3M MRU:27.3M Anon:9.73M Hdr:2.02M Oth:126M
But that's for about 4 TB of ZFS data and I have 32 GB of RAM here, so :)
If you want to really limit ARC then set minimum to 64MB and maximum to 128MB.
Regards, vermaden
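For what it's worth, on Linux the OpenZFS ARC limits are (to my knowledge) exposed as kernel module parameters rather than sysctls. A sketch of the equivalent 128 MB/512 MB bounds; the paths and parameter names are the standard OpenZFS ones, but verify against your distro's packaging:
# persistent: /etc/modprobe.d/zfs.conf (an initramfs rebuild may be needed for early boot)
options zfs zfs_arc_min=134217728 zfs_arc_max=536870912
# at runtime the same parameters are writable under /sys:
$ echo 536870912 | sudo tee /sys/module/zfs/parameters/zfs_arc_max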
Given the installer supports ZFS (and not BTRFS), it seems a no-brainer to me that both Timeshift and the Disks utility should support ZFS (neither currently does; you cannot even format a partition as ZFS with the Disks utility). So as I see it currently:
I'd see a strong case for one or both of:
I'd prefer 1 or both personally, but hey, that'll be subjective, and diverse views on priorities will exist. But btrfs is used by my routers and supported by Timeshift and Disks, so I'm more familiar with it than ZFS, and possibly it's a slightly leaner, simpler filesystem (I can't comment confidently on that; it's inferred only from observing its use on my routers, which are typically resource-focused and limited).
@leigh123linux The Ubuntu kernel already supports ZFS, and the installer can also install Linux Mint or Ubuntu onto ZFS. Why does the mainline kernel matter here?
Because Ubuntu drops support for its stupid ideas.
ZFS support is not dropped.
https://bugs.launchpad.net/ubuntu-desktop-installer/+bug/2000083
https://www.phoronix.com/news/Ubuntu-23.10-ZFS-Install
+1 for ZFS support in Timeshift :)
For using Timeshift to back up a ZFS (sub)volume itself, as @sskras mentioned, another mechanism needs to be implemented. ZFS has a snapshot function built in.
Timeshift could simply create and manage its own ZFS snapshots and disregard the others. If you want your own snapshots outside of Timeshift, go ahead. Nothing would prevent that.
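That ownership split could be as simple as a naming convention: Timeshift would create snapshots under its own prefix and only ever list or destroy those. A rough sketch (the timeshift- prefix and the dataset name are hypothetical):
# create a snapshot owned by Timeshift
$ sudo zfs snapshot rpool/ROOT/ubuntu_abc123@timeshift-2024-01-31_12-00-00
# enumerate only Timeshift's snapshots, ignoring zfs-auto-snap, autosnap, etc.
$ zfs list -H -t snapshot -o name rpool/ROOT/ubuntu_abc123 | grep '@timeshift-'
# remove one of its own snapshots when pruning
$ sudo zfs destroy rpool/ROOT/ubuntu_abc123@timeshift-2024-01-31_12-00-00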
I would love to see support for ZFS :/
Hi,
Any update on ZFS snapshot support?
Best Regards
No one is assigned to work on this issue :)
I'd like to have support for ZFS.
Timeshift ZFS support should probably happen; ZFS is gaining popularity and starting to become an option, or even the default, for some distros. ZFS is considered stable and has been battle-tested on servers for a long time, while BTRFS has been losing support from large projects. Having lost my data five times to BTRFS corruption, I refuse to touch it again.
Ok. I'm going to jump in and attempt this. I'm an old programmer, but I've never done anything like this before. There will be a huge learning curve ahead. But, being retired, I do have the time.
I was concerned that Ubuntu was going to drop ZFS support from its installer. But the latest LTS, 24.04, seems to have it.
The biggest obstacle for me is that Timeshift is written in a programming language (Vala) that I would prefer not to learn. Rather, I'd like to use Python. So, what about that? Is there a way for me to use Python without making a complete mess of the project? Should I start over with a new project?
@wallart1
I may have some potentially good news for you.
This is the 'Time Slider' from OpenIndiana/OpenSolaris/Illumos implemented in Caja file manager from Mate.
I believe someone already ported it to Linux here:
... and it's in Python.
Hope that helps.
Oooo. That's extremely interesting. Thank you so much for that.
Here is another upvote for ZFS support in Timeshift. I was looking at ZFS snapshot managers for my fresh Ubuntu install, and they are all little more than shell scripts or front ends for zfs commands. Fine for servers, but clunky for simple PC home use. A nice visual tool like Timeshift is sorely needed.
Ok. I'm going to jump in and attempt this. I'm an old programmer, but I've never done anything like this before. There will be a huge learning curve ahead. But, being retired, I do have the time.
I was concerned that Ubuntu was going to drop ZFS support from its installer. But the latest LTS, 24.04, seems to have it.
The biggest obstacle for me is that Timeshift is written in a programming language (Vala) that I would prefer not to learn. Rather, I'd like to use Python. So, what about that? Is there a way for me to use Python without making a complete mess of the project? Should I start over with a new project?
I've been doing some research, on and off. I ran across a project by Canonical staff called Zsys. This is exactly the type of backend that I think my project needs. Here is an excellent write-up by the developer.
There are some unfortunate aspects to this project, however: 1) it is still in "experimental" status, at the stage where some complex issues are being discussed; 2) it is on hold because of conflicting priorities at Canonical; 3) it is written in Go, so I'd have trouble contributing to it.
It is available to install from the Ubuntu repository (in Software Manager or APT), and the installation is dead simple.
I've also looked at Sanoid and ZFSBootMenu. These seem like overkill, and the latter seems to require that the operating system(s) be (re)installed on top of it -- a showstopper, IMO.
So, I guess that I'm waiting on Canonical to make the next move. :/
@wallart1: Ummm, why would supporting ZFS snapshots need any boot-related tools like Zsys or ZFSBootMenu? Without having done any R&D, I somehow still don't get that part.
@sskras, thanks for your question.
I use Timeshift when doing maintenance on the OS or on apps that are installed on the OS, and also when I am about to do something that has an elevated risk to the OS. It is normally the case that, if something goes wrong, the OS can still be fully booted, and I can use Timeshift to revert the changes. But, it can also be the case that I did something that makes the OS unbootable. So, I want to be able to roll back changes before boot time.
I realize that Timeshift doesn't have this feature. This is an easy opportunity to make this project better than that.
If you have read the write-up that I pointed out in a previous post, you will realize that "taking a snapshot" from inside an Ubuntu ZFS system is more complex than it first seems. The author/developer had to invent the concept of a "system state" that represents all of the many datasets, snapshots and attributes that exist when the system state is saved. So, simply rolling back a single snapshot after you've messed something up is not sufficient.
Now, I happen to have a server that runs a hypervisor (Proxmox in my case) which makes saving and reverting the state of a virtual machine a very simple matter. It only has one or two virtual disks to worry about. But, my daily driver system does not have a hypervisor. This is where having a pre-boot rollback capability is critical, in my view.
Of course, the other use for snapshots is to save the state of user data. I don't think that Timeshift handles this very well. It looks like an afterthought to me. Zsys handles this out of the box without giving it a second thought. (However, this is an area where a major discussion exists. See, for example.)
If I tried to rewrite Zsys (because the existing one is written in a language that I don't have expertise in), then the scope of the project would go far beyond anything I am willing to tackle.
I suppose I could begin with just the capability to manage snapshots of user data. But that is not really the intent of the project, as I see it. There are other backup programs that do a better job of this than I could.
At the same time, I am interested in hearing from people what features they want. So, again, thanks for your question. :-)
P.S.: And, as always, remember that snapshots are not substitutes for backups!
No support for ZFS is clearly a bug!
I used the ZFS-with-encrypted-disk Mint installer options when I had to reinstall Mint 21.*, after a Timeshift restore failed to fix a bad boot on an install with the encrypted LUKS, LVM2, and ext4 options. That prior setup caused a no-desktop Nvidia driver problem, puzzling desktop freezes, and disk I/O stalls! All were fixed with ZFS, and the (default) LZ4 disk compression freed up an impressive amount of precious system disk space; the machine feels faster too, so it's superior for desktop use as well.
This inadequate support may become a critical upgrade barrier in the future too, because mintupgrade for Mint 22 required a fresh Timeshift snapshot as the first step. So just how can I do that if I don't have any supported target locations!? Maybe I should raise that with the mintupgrade authors too.
I tried BTRFS on some SSD drives on another machine and saw performance issues, so I switched the drives to ZFS, and they became noticeably faster than BTRFS or ext4. Some of this may be why the Mint installer hasn't offered BTRFS as an automatic setup option, so I don't get the inconsistency of Timeshift (perhaps properly) supporting BTRFS, but not ZFS.
Timeshift's inefficient approach of creating a new directory and rsyncing (copying, not updating) into it for every snapshot only makes sense for inferior journaling/FAT filesystem locations, which should really be a last resort as a backup location. That's even worse than using tar archives, and it's why snapshot deletion is so damned time-consuming; anything git-like (diff-based), including something like Git Large File Storage (LFS), or smart use of true filesystem snapshots, looks better still.
My ideas are:
Meanwhile, I'm taking my time learning the ins and outs of zfs snapshots. Currently studying a tool called znapzend to automate snapshots and backups.
It would be worth your time to learn as much as possible about zfs and do your own snapshots from the command line. The commands are really not difficult. But you do need to read up on them.
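For anyone wanting to follow that advice, a manual round trip looks roughly like this; the tank/test dataset is made up, so try it on a throwaway dataset first:
# snapshot, change something, then undo the change
$ sudo zfs snapshot tank/test@before-change
$ touch /tank/test/oops.txt
# rollback reverts to the most recent snapshot (use -r to go further back)
$ sudo zfs rollback tank/test@before-change
# destroy the snapshot once it is no longer needed
$ sudo zfs destroy tank/test@before-change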
Meanwhile, I'm taking my time learning the ins and outs of zfs snapshots. Currently studying a tool called znapzend to automate snapshots and backups.
It would be worth your time to learn as much as possible about zfs and do your own snapshots from the command line. The commands are really not difficult. But you do need to read up on them.
That's fine if it's only for one dataset, but the ZFS rpool (root) generated by the Mint install contains multiple datasets, so that may not be so easy. You probably don't want to make a recursive snapshot of the rpool dataset for this, because that will catch irrelevant stuff like logs, etc. The whole point of a dedicated system snapshot tool is to make this easy and automated for users of any level. ZFS is a lot easier than BTRFS to create snapshots for, because of the latter's ridiculous need for extra setup to make snapshots, which suggests tool immaturity.
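One way around that: snapshot the whole hierarchy atomically and then immediately drop the pieces you don't care about, or mark those datasets so tools skip them. A sketch; the dataset names follow the Mint/Ubuntu layout shown later in this thread, and the snapshot name is hypothetical:
# snapshot the root hierarchy recursively and atomically
$ sudo zfs snapshot -r rpool/ROOT/ubuntu_8juuya@timeshift-test
# then discard the parts that shouldn't be restored, e.g. logs
$ sudo zfs destroy rpool/ROOT/ubuntu_8juuya/var/log@timeshift-test
# alternatively, zfs-auto-snapshot-style tools honour a user property for opting datasets out
$ sudo zfs set com.sun:auto-snapshot=false rpool/ROOT/ubuntu_8juuya/var/log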
The command to list the mount points for ZFS, for those writing tools that access them (e.g. Timeshift), is sudo zfs get mountpoint. E.g.:
sudo zfs get mountpoint
NAME PROPERTY VALUE SOURCE
bpool mountpoint /boot local
bpool/BOOT mountpoint none local
bpool/BOOT/ubuntu_8juuya mountpoint /boot local
rpool mountpoint / local
rpool/ROOT mountpoint none local
rpool/ROOT/ubuntu_8juuya mountpoint / local
rpool/ROOT/ubuntu_8juuya/srv mountpoint /srv inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/usr mountpoint /usr inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/usr/local mountpoint /usr/local inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var mountpoint /var inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/games mountpoint /var/games inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/lib mountpoint /var/lib inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/lib/AccountsService mountpoint /var/lib/AccountsService inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/lib/NetworkManager mountpoint /var/lib/NetworkManager inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/lib/apt mountpoint /var/lib/apt inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/lib/dpkg mountpoint /var/lib/dpkg inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/log mountpoint /var/log inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/mail mountpoint /var/mail inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/snap mountpoint /var/snap inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/spool mountpoint /var/spool inherited from rpool/ROOT/ubuntu_8juuya
rpool/ROOT/ubuntu_8juuya/var/www mountpoint /var/www inherited from rpool/ROOT/ubuntu_8juuya
rpool/USERDATA mountpoint / local
rpool/USERDATA/richard_yss56r mountpoint /home/richard local
rpool/USERDATA/root_yss56r mountpoint /root local
rpool/keystore mountpoint - -
bpool is the boot ZFS pool; rpool is the root ZFS pool.
So, the relevant columns here are:
There may be an API that provides all of this too.
See https://dan.langille.org/2019/04/22/mount-your-zfs-datasets-anywhere-you-want/ for examples of using sudo zfs get mountpoint.
You can also do sudo zfs set mountpoint={mountpoint-value} {root-or-dataset} to create or change a mount point, but Timeshift probably won't need this.
E.g. I did this for my data SSD drive ZFS pool datasets, to mount them in a familiar place:
sudo zfs set mountpoint=/mnt/data-ssd dssd-pool
sudo zfs set mountpoint=/mnt/data-ssd2 dssd-pool2
@rwperrott commented 44 minutes ago:
The command to list the mount points for ZFS, for those writing tools that access them (e.g. Timeshift), is sudo zfs get mountpoint. E.g.:
The result may include zvols (instead of datasets), which operate at the block level. I doubt there is a need to back up or restore them blindly.
The result may also contain snapshots and their bookmarks. No point in backing those up.
So I propose adding the -t filesystem switch to the get subcommand: zfs get -t filesystem mountpoint.
But even with that, the command lists a lot of mountpoints whose value equals legacy on my installation of Linux Mint 21. I am not sure exactly where they came from. After filtering them out I get this:
saukrs@s2-book:~$ zfs get -t filesystem mountpoint | awk '$3 !~ /legacy/' | column -t
NAME PROPERTY VALUE SOURCE
bpool mountpoint /boot local
bpool/BOOT mountpoint none local
bpool/BOOT/ubuntu_ijuc69 mountpoint /boot local
rpool mountpoint none local
rpool/ROOT mountpoint none received
rpool/ROOT/ubuntu_ijuc69 mountpoint / received
rpool/ROOT/ubuntu_ijuc69/srv mountpoint /srv inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/usr mountpoint /usr inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/usr/local mountpoint /usr/local inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var mountpoint /var inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/games mountpoint /var/games inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib mountpoint /var/lib inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib/AccountsService mountpoint /var/lib/AccountsService inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib/NetworkManager mountpoint /var/lib/NetworkManager inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib/apt mountpoint /var/lib/apt inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib/dpkg mountpoint /var/lib/dpkg inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/log mountpoint /var/log inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/mail mountpoint /var/mail inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/snap mountpoint /var/snap inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/spool mountpoint /var/spool inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/www mountpoint /var/www inherited from rpool/ROOT/ubuntu_ijuc69
rpool/USERDATA mountpoint / received
rpool/USERDATA/root_jootn5 mountpoint /root received
rpool/USERDATA/s2_jootn5 mountpoint /home/s2 received
Now I still see some unused dataset entries that carry the value none. Let's filter those out:
saukrs@s2-book:~$ zfs get -t filesystem mountpoint | awk '$3 !~ /legacy/ && $3 !~ /none/' | column -t
NAME PROPERTY VALUE SOURCE
bpool mountpoint /boot local
bpool/BOOT/ubuntu_ijuc69 mountpoint /boot local
rpool/ROOT/ubuntu_ijuc69 mountpoint / received
rpool/ROOT/ubuntu_ijuc69/srv mountpoint /srv inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/usr mountpoint /usr inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/usr/local mountpoint /usr/local inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var mountpoint /var inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/games mountpoint /var/games inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib mountpoint /var/lib inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib/AccountsService mountpoint /var/lib/AccountsService inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib/NetworkManager mountpoint /var/lib/NetworkManager inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib/apt mountpoint /var/lib/apt inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/lib/dpkg mountpoint /var/lib/dpkg inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/log mountpoint /var/log inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/mail mountpoint /var/mail inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/snap mountpoint /var/snap inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/spool mountpoint /var/spool inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var/www mountpoint /var/www inherited from rpool/ROOT/ubuntu_ijuc69
rpool/USERDATA mountpoint / received
rpool/USERDATA/root_jootn5 mountpoint /root received
rpool/USERDATA/s2_jootn5 mountpoint /home/s2 received
Now this looks better. But still some transparent, non-functioning datasets remain:
NAME PROPERTY VALUE SOURCE
bpool mountpoint /boot local
rpool/ROOT/ubuntu_ijuc69/usr mountpoint /usr inherited from rpool/ROOT/ubuntu_ijuc69
rpool/ROOT/ubuntu_ijuc69/var mountpoint /var inherited from rpool/ROOT/ubuntu_ijuc69
rpool/USERDATA mountpoint / received
Maybe they come from Linux Mint's design of the ZFS layout, or maybe from my migration to a smaller disk. I am unsure.
Anyway, I suggest using just zfs mount instead of zfs get mountpoint:
saukrs@s2-book:~$ zfs mount
rpool/ROOT/ubuntu_ijuc69 /
rpool/USERDATA/root_jootn5 /root
rpool/ROOT/ubuntu_ijuc69/srv /srv
rpool/ROOT/ubuntu_ijuc69/usr/local /usr/local
rpool/ROOT/ubuntu_ijuc69/var/spool /var/spool
rpool/ROOT/ubuntu_ijuc69/var/mail /var/mail
rpool/ROOT/ubuntu_ijuc69/var/games /var/games
rpool/USERDATA/s2_jootn5 /home/s2
rpool/ROOT/ubuntu_ijuc69/var/lib /var/lib
rpool/ROOT/ubuntu_ijuc69/var/www /var/www
rpool/ROOT/ubuntu_ijuc69/var/snap /var/snap
rpool/ROOT/ubuntu_ijuc69/var/log /var/log
rpool/ROOT/ubuntu_ijuc69/var/lib/apt /var/lib/apt
rpool/ROOT/ubuntu_ijuc69/var/lib/NetworkManager /var/lib/NetworkManager
rpool/ROOT/ubuntu_ijuc69/var/lib/dpkg /var/lib/dpkg
rpool/ROOT/ubuntu_ijuc69/var/lib/AccountsService /var/lib/AccountsService
bpool/BOOT/ubuntu_ijuc69 /boot
I assume this is a better way to get the list of datasets that should be taken into account when doing a backup.
True, zfs mount looks more practical; its output can be used as-is.
PS. To avoid launching another process, it might be worth filtering programmatically through /proc/mounts and selecting only the lines containing zfs:
saukrs@s2-book:~$ cat /proc/mounts | grep zfs
rpool/ROOT/ubuntu_ijuc69 / zfs rw,relatime,xattr,posixacl 0 0
rpool/USERDATA/root_jootn5 /root zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/srv /srv zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/usr/local /usr/local zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/spool /var/spool zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/mail /var/mail zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/games /var/games zfs rw,relatime,xattr,posixacl 0 0
rpool/USERDATA/s2_jootn5 /home/s2 zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/lib /var/lib zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/www /var/www zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/snap /var/snap zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/log /var/log zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/lib/apt /var/lib/apt zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/lib/NetworkManager /var/lib/NetworkManager zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/lib/dpkg /var/lib/dpkg zfs rw,relatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu_ijuc69/var/lib/AccountsService /var/lib/AccountsService zfs rw,relatime,xattr,posixacl 0 0
bpool/BOOT/ubuntu_ijuc69 /boot zfs rw,nodev,relatime,xattr,posixacl 0 0
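One caveat with a plain grep: it also matches lines where "zfs" merely appears somewhere in a path or mount option. Matching on the third field (the filesystem type) is stricter; a sketch:
# print dataset and mountpoint only where the filesystem-type field is exactly "zfs"
$ awk '$3 == "zfs" { print $1, $2 }' /proc/mounts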
Meanwhile, I'm taking my time learning the ins and outs of zfs snapshots. Currently studying a tool called znapzend to automate snapshots and backups.
It would be worth your time to learn as much as possible about zfs and do your own snapshots from the command line. The commands are really not difficult. But you do need to read up on them.
Well, I just learned that installing on ZFS is not an option in Linux Mint 22.0 "Wilma". There was a short period when it wasn't available on Ubuntu either, but they brought the ZFS install back, AFAIK.
But, reading the release notes, it seems that LM came to its decision for its own reasons. https://www.linuxmint.com/rel_wilma.php
Start using serious BSD/UNIX systems with real ZFS support instead of some Linux toys :p
Start using serious BSD/UNIX systems with real ZFS support instead of some Linux toys :p
This is a thread about Timeshift, which is maintained by the Linux Mint people. Your irrelevant statement is not going to change any minds. Or, are you here just for trolling purposes?
It was just a joke, a poor troll attempt; do not get offended and do not take it seriously :)
Having Timeshift ported to FreeBSD would be kinda cool IMO (assuming ZFS functionality is added, of course).
@wallart1 commented 2 days ago:
Well, I just learned that installing on ZFS is not an option in Linux Mint 22.0 "Wilma". There was a short period when it wasn't available on Ubuntu either, but they brought the ZFS install back, AFAIK.
But, reading the release notes, it seems that LM came to its decision for its own reasons. https://www.linuxmint.com/rel_wilma.php
Thanks for the update. That's pretty sad. I wonder how Mint 21.3 on a ZFS rootfs will behave during the upgrade to Mint 22.
Mint 21.3 was released in Jan 2024: https://blog.linuxmint.com/?p=4639
@wallart1 commented 2 days ago:
Well, I just learned that installing on ZFS is not an option in Linux Mint 22.0 "Wilma". There was a short period when it wasn't available on Ubuntu either, but they brought the ZFS install back, AFAIK. But, reading the release notes, it seems that LM came to its decision for its own reasons. https://www.linuxmint.com/rel_wilma.php
Thanks for the update. That's pretty sad. I wonder how Mint 21.3 on a ZFS rootfs will behave during the upgrade to Mint 22.
Mint 21.3 was released in Jan 2024: https://blog.linuxmint.com/?p=4639
I tried it in a VirtualBox VM. After the upgrade, it would boot, but it crashed during login.
@wallart1 commented 2 days ago:
Well, I just learned that installing on ZFS is not an option in Linux Mint 22.0 "Wilma". There was a short period when it wasn't available on Ubuntu either, but they brought the ZFS install back, AFAIK. But, reading the release notes, it seems that LM came to its decision for its own reasons. https://www.linuxmint.com/rel_wilma.php
Thanks for the update. That's pretty sad. I wonder how Mint 21.3 on a ZFS rootfs will behave during the upgrade to Mint 22. Mint 21.3 was released in Jan 2024: https://blog.linuxmint.com/?p=4639
I tried it in a VirtualBox VM. After the upgrade, it would boot, but it crashed during login.
My 21.3 upgrade worked fine, and is still fine, on a real disk. I've found GNOME Boxes easier, with Virtual Machine Manager useful for fixing stupid breakages, like Xubuntu not liking a bridged NIC.
Describe the bug
After installing Linux Mint 21 with the ZFS filesystem, I noticed that Timeshift is not able to take snapshots of it. When opening Timeshift, it displays a message saying Live Mode (restore only), as if I were using the live USB, even though I have already completely installed Mint on my device.
To Reproduce
Steps to reproduce the behavior:
System: