linuxmint / timeshift

System restore tool for Linux. Creates filesystem snapshots using rsync+hardlinks, or BTRFS snapshots. Supports scheduled snapshots, multiple backup levels, and exclude filters. Snapshots can be restored while system is running or from Live CD/USB.

ZFS snapshots not supported in Timeshift #56

Open anaximeno opened 1 year ago

anaximeno commented 1 year ago

Describe the bug After installing Linux Mint 21 on the ZFS filesystem, I noticed that Timeshift is not able to take snapshots of it. When opened, Timeshift displays a message that says Live Mode (only recuperation), as if I were running from the live USB, even though Mint is already completely installed on my device.

To Reproduce Steps to reproduce the behavior:

  1. Go to timeshift (after installing mint with ZFS) and open it.
  2. Error message: Live Mode (only recuperation)

System:

KAMI911 commented 1 year ago

It would be nice to add ZFS support.

anarcat commented 1 year ago

This looks like a duplicate of #25.

hudsantos commented 1 year ago

There was a similar issue that once was opened on the original repo (not this fork):

https://github.com/teejee2008/timeshift/issues/529

...that was closed by teejee (the Timeshift creator) with this message: "ZFS has a weird disk layout that is not supported."

Maximum respect for Tony George; we have all been using his creation for years. Let's just take our time and contribute as we can.

From the little I know about ZFS, it really differs from the filesystems that timeshift has been supporting very well so far.

ZFS is a rock-solid, 20+-year-old project from Sun/Oracle, but as an option on the desktop Linux distros where timeshift is popular it is still very recent, so it will probably take some time before we see timeshift supporting that new scenario.

I really hope that timeshift, be it teejee's original or this Mint fork, can support ZFS somehow.

leigh123linux commented 1 year ago

-1 to this unless zfs support gets included in the mainline kernel.

KAMI911 commented 1 year ago

@leigh123linux The Ubuntu kernel already supports ZFS, and the installer can also install Linux Mint or Ubuntu onto ZFS. Why does the mainline kernel matter here?

sskras commented 1 year ago

The Ubuntu kernel already supports ZFS, and the installer can also install Linux Mint or Ubuntu onto ZFS. Why does the mainline kernel matter here?

I second that. I just installed Mint 21 onto ZFS, and it's not there: (screenshot)

Seems like @teejee2008 didn't like the notion of the first field in the appropriate /proc/mounts line in the aforementioned issue:

(screenshot)

Not sure what it looks like for root-on-BTRFS.


The original repo has been archived since Oct 16, 2022, it seems. I guess this means that folks interested in this feature can now contribute to this repository.

florianschroen commented 1 year ago

I would like to see zfs support, too. I can contribute with testing.

I guess this would be a lot of work. I had a quick look at the source and saw that this goes a little against, or at least sits beside, the existing logic that guesses/detects possible datastores (disks/volumes/filesystems) based on the serving device.

zfs is an all-in-one solution replacing classical tools like lvm (volume management), luks (encryption), and ext/btrfs (filesystems).

backup to a zfs (sub)volume

My case would be backing up a luks-encrypted root on lvm to a 3-disk zfs raidz1 (RAID-5). On my zfs pool, different volumes and subvolumes are created for different purposes. Some of them use encryption, others don't.

Keeping this in mind, a user has to make sure the preferred backup volume is already set up and mounted. This applies to zfs backup sources and destinations alike.

A timeshift implementation would have to look at the mountpoints of type zfs (grep zfs /proc/mounts), instead of or in addition to the device detection.

With this behavior, an rsync backup can be realized.
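That mount-scanning step could be sketched roughly as follows. This is a hypothetical illustration in Python (Timeshift itself is written in Vala); the function name is my own:

```python
import os

def zfs_mounts(mounts_text):
    """Return (dataset, mountpoint) pairs for every mount of type 'zfs'.

    Each /proc/mounts line has the form:
    <source> <mountpoint> <fstype> <options> <dump> <pass>
    For zfs, the source field is the dataset name rather than a block
    device, which is exactly why device-based detection misses it.
    """
    pairs = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] == "zfs":
            pairs.append((fields[0], fields[1]))
    return pairs

# On a live system:
if os.path.exists("/proc/mounts"):
    with open("/proc/mounts") as f:
        for dataset, mountpoint in zfs_mounts(f.read()):
            print(dataset, "->", mountpoint)
```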

backup/snapshot a zfs (sub)volume

To use timeshift for backing up a zfs (sub)volume, as @sskras mentioned, another mechanism needs to be implemented.

zfs has a snapshot function built in. The commands and options for zfs snapshot management can be found in the zfs-autosnap source (https://github.com/rollcat/zfs-autosnap).
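The underlying CLI calls are simple; here is a minimal sketch of how a tool might build them (the `timeshift` snapshot tag and these helper names are my own invention, not anything Timeshift or zfs-autosnap actually uses):

```python
import subprocess

TAG = "timeshift"  # hypothetical prefix so a tool can recognize its own snapshots

def snapshot_cmd(dataset, label):
    # e.g. zfs snapshot usbpool/backup@timeshift_2022-10-28
    return ["zfs", "snapshot", f"{dataset}@{TAG}_{label}"]

def list_cmd(dataset, recursive=False):
    # -H: no header, tab-separated; -o name: names only
    cmd = ["zfs", "list", "-H", "-t", "snapshot", "-o", "name"]
    if recursive:
        cmd.insert(2, "-r")
    return cmd + [dataset]

def rollback_cmd(dataset, label):
    # Note: plain `zfs rollback` only reverts to the most recent snapshot;
    # rolling back further needs -r, which destroys the newer snapshots.
    return ["zfs", "rollback", f"{dataset}@{TAG}_{label}"]

def run(cmd):
    # Execute one of the command lists above and return its stdout.
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
```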

some example output from my system:

# disks and partitions (sdd lvm & sd[efg] for zfs)
$ lsblk
NAME                                           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdd                                              8:48   1 119,2G  0 disk
├─sdd1                                           8:49   1 234,3M  0 part
└─sdd2                                           8:50   1   119G  0 part
  ├─ssdVG-root                                 253:0    0    40G  0 lvm
  │ └─root                                     253:1    0    40G  0 crypt /
  ├─ssdVG-boot                                 253:2    0     1G  0 lvm   /boot
sde                                              8:64   0   7,3T  0 disk
└─sde1                                           8:65   0   7,2T  0 part
sdf                                              8:80   0   7,3T  0 disk
└─sdf1                                           8:81   0   7,2T  0 part
sdg                                              8:96   0   7,3T  0 disk
└─sdg1                                           8:97   0   7,2T  0 part

$ zpool status usbpool
  pool: usbpool
 state: ONLINE
  scan: scrub repaired 0B in 14:36:51 with 0 errors on Sun Dec 11 15:00:53 2022
config:

        NAME                              STATE     READ WRITE CKSUM
        usbpool                           ONLINE       0     0     0
          raidz1-0                        ONLINE       0     0     0
            wwn-0x5000c500c6988b98-part1  ONLINE       0     0     0
            wwn-0x5000c500d0142180-part1  ONLINE       0     0     0
            wwn-0x5000c500cf99cd54-part1  ONLINE       0     0     0

errors: No known data errors

# list volumes of zfs pool "usbpool"
$ zfs list -r usbpool
NAME                                     USED  AVAIL     REFER  MOUNTPOINT
usbpool                                 7.11T  7.24T      155K  /usbpool
usbpool/backup                          7.01T  7.24T      341K  /usbpool/backup
usbpool/backup/clonezilla               10.0G  7.24T     10.0G  /usbpool/backup/clonezilla
usbpool/backup/proxmox                  5.82T  7.24T     5.77T  /usbpool/backup/proxmox
usbpool/backup/rsnapshot                1.18T  7.24T     1.14T  /usbpool/backup/rsnapshot

# list snapshots of volume "backup" in zfs pool "usbpool"
$ zfs list -t snapshot usbpool/backup | head
NAME                                                    USED  AVAIL     REFER  MOUNTPOINT
usbpool/backup@zfs-auto-snap_monthly-2022-07-12-0643      0B      -      330K  -
usbpool/backup@zfs-auto-snap_monthly-2022-08-12-0557      0B      -      330K  -
usbpool/backup@zfs-auto-snap_monthly-2022-09-11-0629      0B      -      330K  -
usbpool/backup@zfs-auto-snap_monthly-2022-10-11-0610      0B      -      330K  -
usbpool/backup@zfs-auto-snap_weekly-2022-10-21-0554       0B      -      330K  -
usbpool/backup@autosnap_2022-10-27_11:00:04_monthly       0B      -      330K  -
usbpool/backup@autosnap_2022-10-27_11:00:04_daily         0B      -      330K  -
usbpool/backup@autosnap_2022-10-28_00:00:04_daily         0B      -      330K  -
usbpool/backup@zfs-auto-snap_weekly-2022-10-28-0554       0B      -      330K  -

# list snapshots of sub-volume "backup/rsnapshot" in zfs pool "usbpool"
$ zfs list -t snapshot usbpool/backup/rsnapshot | head
NAME                                                              USED  AVAIL     REFER  MOUNTPOINT
usbpool/backup/rsnapshot@zfs-auto-snap_monthly-2022-07-12-0643   5.34G      -     1.14T  -
usbpool/backup/rsnapshot@zfs-auto-snap_monthly-2022-08-12-0557   4.91G      -     1.14T  -
usbpool/backup/rsnapshot@zfs-auto-snap_monthly-2022-09-11-0629   5.40G      -     1.14T  -
usbpool/backup/rsnapshot@zfs-auto-snap_monthly-2022-10-11-0610   2.90G      -     1.14T  -
usbpool/backup/rsnapshot@zfs-auto-snap_weekly-2022-10-21-0554    68.7M      -     1.14T  -
usbpool/backup/rsnapshot@autosnap_2022-10-27_11:00:03_monthly       0B      -     1.14T  -
usbpool/backup/rsnapshot@autosnap_2022-10-27_11:00:03_daily         0B      -     1.14T  -
usbpool/backup/rsnapshot@autosnap_2022-10-28_00:00:03_daily         0B      -     1.14T  -
usbpool/backup/rsnapshot@zfs-auto-snap_weekly-2022-10-28-0554     852K      -     1.14T  -

# listing/counting snapshots can be done recursively
$ zfs list -t snapshot usbpool/backup | wc -l
174
$ zfs list -r -t snapshot usbpool/backup | wc -l
822
$ zfs list -r -t snapshot usbpool | wc -l
1558

# mounts can be listed via /proc/mounts
$ grep zfs /proc/mounts
usbpool /usbpool zfs rw,xattr,noacl 0 0
usbpool/backup /usbpool/backup zfs rw,xattr,noacl 0 0
usbpool/backup/rsnapshot /usbpool/backup/rsnapshot zfs rw,xattr,posixacl 0 0
usbpool/backup/proxmox /usbpool/backup/proxmox zfs rw,xattr,noacl 0 0
usbpool/backup/clonezilla /usbpool/backup/clonezilla zfs rw,xattr,noacl 0 0
leigh123linux commented 1 year ago

@leigh123linux Ubuntu kernel already support ZFS, and the installer also can install Linux Mint or Ubuntu onto ZFS. Why do mainline kernel matter here?

Because ubuntu drops support for its stupid ideas.

https://www.phoronix.com/news/Ubuntu-23.04-No-OpenZFS

anaximeno commented 1 year ago

I also wouldn't recommend installing Linux Mint with ZFS; BTRFS is a better option for a lot of reasons.

This YouTube video might be a good reference on the pros and cons of some of the most common Linux filesystems: https://youtu.be/G785-kxFH_M

sskras commented 1 year ago

@leigh123linux commented 3 days ago:

Because ubuntu drops support for its stupid ideas.

https://www.phoronix.com/news/Ubuntu-23.04-No-OpenZFS

Can we stick to the facts and avoid being subjective? E.g.:

Because ubuntu drops support for ZFS.

Thanks in advance:)

sskras commented 1 year ago

@anaximeno commented 3 days ago:

I also wouldn't recommend installing Linux Mint with ZFS, BTRFS is a better option for a lot of reasons.

These reasons don't seem to include the ability to mount a degraded RAID-1: https://www.reddit.com/r/btrfs/comments/vf2u17/comment/icuf4ao/

anaximeno commented 1 year ago

These reasons don't seem to include the ability to mount a degraded RAID-1: https://www.reddit.com/r/btrfs/comments/vf2u17/comment/icuf4ao/

It didn't include it, so thanks; now it does.

anaximeno commented 1 year ago

Let me add one thing I forgot to say in the comment about using BTRFS over ZFS.

BTRFS seems to be a better option for personal computers than ZFS, but this is arguable when talking about servers. The way ZFS uses RAM during its operations could be seen as good or bad depending on the conditions of your computer. From what I've noticed, it is not the best option when you have a low RAM capacity. BTRFS is far from perfect (there are also problems with RAID-5 and RAID-6, if I'm remembering right), but it does have good support in the mainline kernel and it's getting better and faster with each new release.

Well, this is my subjective opinion based on objective facts. I don't believe there's only one way of doing things, so people should use whatever they find to be the best option for them.

sskras commented 1 year ago

It didn't include it, so thanks; now it does.

To me this means that some of your pros are cons for other people (in the context of using BTRFS). So it looks kind of subjective.

From what I noticed it is not the best option when having a lower RAM capacity.

I haven't been running a machine with 512 MB of RAM + ZFS, but some folks have:

@vermaden tweeted on 1:06 AM · Apr 13, 2023:

I used 2 x 2TB ZFS mirror for YEARS with 512 MB of RAM on #FreeBSD and everything went smooth - my uptime was as long as I wanted to - usually ended by need for upgrade.

Just limit ARC size to 128 MB and nothing more needed.

My current 5TB of data backup boxes have 2 GB RAM.

So it's a matter of configuration effort and a performance trade-off, IMO.

anaximeno commented 1 year ago

It didn't include it, so thanks; now it does.

To me this means that some of your pros are cons for other people (in the context of using BTRFS). So it looks kind of subjective.

No. I do see that as a problem BTRFS has, and as such, I do include it when weighing between BTRFS and ZFS.

From what I noticed it is not the best option when having a lower RAM capacity.

I haven't been running a machine with 512 MB of RAM + ZFS, but some folks have:

@vermaden tweeted on 1:06 AM · Apr 13, 2023:

I used 2 x 2TB ZFS mirror for YEARS with 512 MB of RAM on #FreeBSD and everything went smooth - my uptime was as long as I wanted to - usually ended by need for upgrade.

Just limit ARC size to 128 MB and nothing more

My current 5TB of data backup boxes have 2 GB RAM.

So it's a matter of configuration effort and a performance trade-off, IMO.

That's nice, I didn't know you can configure it that way.

anaximeno commented 1 year ago

I still do believe that support for ZFS should be added to Timeshift.

NicolasGoeddel commented 1 year ago

Since I installed Ubuntu with ZFS on my work laptop and I want to be able to use Timeshift, I would appreciate the effort.

vermaden commented 1 year ago

I would love to see ZFS support here - then a FreeBSD port would be lovely :)

Regards, vermaden

vermaden commented 1 year ago

Also the values (on FreeBSD) to set minimum and maximum ARC size are:

# grep arc /etc/sysctl.conf
vfs.zfs.arc_max=536870912
vfs.zfs.arc_min=134217728

The 'dots' syntax is also OK:

# grep arc /etc/sysctl.conf
vfs.zfs.arc.max=536870912
vfs.zfs.arc.min=134217728

I believe something similar (or identical) is available on Linux.

The above settings are for a 128 MB minimum and a 512 MB maximum.

Usually ARC still sits at the maximum of that value - stats from several days of uptime on my laptop, from the htop(1) tool, below.

ARC: 512M Used:241M MFU:75.3M MRU:27.3M Anon:9.73M Hdr:2.02M Oth:126M

But it's for about 4 TB of ZFS data, and I have 32 GB RAM here, so :)

If you want to really limit ARC then set minimum to 64MB and maximum to 128MB.
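For the Linux side of that: with OpenZFS on Linux, the ARC limits are module parameters rather than sysctls. A sketch of the equivalent setup, using the same 128 MB / 512 MB values as above (the parameter names and paths are standard OpenZFS, but verify against your distro):

```
# /etc/modprobe.d/zfs.conf -- picked up at module load / next boot
options zfs zfs_arc_min=134217728
options zfs zfs_arc_max=536870912

# Inspect (and on recent OpenZFS, change at runtime) via sysfs:
#   cat /sys/module/zfs/parameters/zfs_arc_max
#   echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max   # as root
```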

Regards, vermaden

bernd-wechner commented 1 year ago

Given the installer supports zfs (and not btrfs), it seems a no-brainer to me that both Timeshift and the Disks utility should support zfs (neither currently does; you cannot even format a partition as zfs with the Disks utility). So, as I see it currently:

  1. Installer supports zfs but not btrfs as an easy option.
  2. Timeshift supports btrfs but not zfs.
  3. The Disks utility supports formatting btrfs but not zfs.

I'd see a strong case for one or both of:

  1. Support btrfs as an easy install option (it can be done by manually creating partitions, but still: given that timeshift and disks support btrfs, why not an easy install?).
  2. Support zfs in timeshift and disks (if we can easily create a zfs install, why not have these two important utilities support it too?).

I'd prefer 1, or both, personally, but hey, that'll be subjective, and diverse views on priorities will exist. btrfs is used by my routers and supported by Timeshift and Disks, so I'm more familiar with it than with zfs, and possibly it's a slightly leaner, simpler filesystem (I can't comment confidently on that; it's inferred only from seeing it used on my routers, which are typically resource-constrained).

renatofrota commented 10 months ago

@leigh123linux Ubuntu kernel already support ZFS, and the installer also can install Linux Mint or Ubuntu onto ZFS. Why do mainline kernel matter here?

Because ubuntu drops support for its stupid ideas.

https://www.phoronix.com/news/Ubuntu-23.04-No-OpenZFS

ZFS support has not been dropped:

https://bugs.launchpad.net/ubuntu-desktop-installer/+bug/2000083

https://www.phoronix.com/news/Ubuntu-23.10-ZFS-Install

+1 for ZFS support in Timeshift :)

wallart1 commented 8 months ago

I would like to see zfs support, too. I can contribute with testing. […]

Timeshift could simply create and manage its own ZFS snapshots and disregard the others. If you want your own snapshots outside of Timeshift, go ahead. Nothing would prevent that.
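That "manage only our own snapshots" rule is easy to enforce, since a snapshot's identity is encoded in its name. A small sketch, assuming a hypothetical `timeshift-` naming prefix and name input as produced by `zfs list -H -t snapshot -o name`:

```python
def own_snapshots(names, prefix="timeshift-"):
    """Keep only snapshots whose @-part starts with our prefix,
    leaving zfs-auto-snap/sanoid/manual snapshots untouched."""
    ours = []
    for name in names:
        dataset, sep, snap = name.partition("@")
        if sep and snap.startswith(prefix):
            ours.append(name)
    return ours
```

Pruning old snapshots would then iterate over `own_snapshots(...)` only, so user-created and third-party snapshots are never touched.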

ivancarlosti commented 7 months ago

I would love to see support for ZFS :/

liberodark commented 6 months ago

Hi,

Any update on ZFS snapshot support?

Best Regards

sskras commented 6 months ago

(screenshot)

No one is assigned to work on this issue :)

andreampiovesana commented 3 months ago

I'd like to have support for ZFS, too.

ShayBox commented 3 months ago

Timeshift ZFS support should probably happen: ZFS is gaining popularity and starting to become an option, or even the default, for some distros. ZFS is considered stable and has been battle-tested on servers for a long time, while BTRFS has only been losing support from large projects. Having lost my data five times to BTRFS corruption, I refuse to touch it again.

wallart1 commented 2 months ago

Ok. I'm going to jump in and attempt this. I'm an old programmer, but I've never done anything like this before. There will be a huge learning curve ahead. But, being retired, I do have the time.

I was concerned that Ubuntu was going to drop ZFS support in its installer. But the latest LTS, 24.04, seems to have it.

The biggest obstacle for me is that Timeshift is written in a programming language (Vala) that I would prefer not to learn. Rather, I'd like to use Python. So, what about that? Is there a way for me to use Python without making a complete mess of the project? Should I start over with a new project?

vermaden commented 2 months ago

@wallart1

I may have some potentially good news for you.

This is the 'Time Slider' from OpenIndiana/OpenSolaris/Illumos implemented in Caja file manager from Mate.

(screenshot: OpenIndiana 2019.10, Caja Time Slider)

I believe someone already ported it to Linux here:

... and it's in Python.

Hope that helps.

wallart1 commented 2 months ago

Oooo. That's extremely interesting. Thank you so much for that.

laserburn commented 2 months ago

Here is another upvote for zfs support in Timeshift. I was looking at zfs snapshot managers for my fresh Ubuntu install, and they are all little more than shell scripts or front ends for zfs commands. Fine for servers, but clunky for simple home PC use. A nice visual tool like Timeshift is sorely needed.

wallart1 commented 2 months ago

Ok. I'm going to jump in and attempt this. […]

I've been doing some research, on and off. I ran across a project by Canonical staff called Zsys. This is exactly the type of backend that I think my project needs. Here is an excellent write-up by the developer.

There are some unfortunate aspects to this project, however: 1) it is still in "experimental" status, at a stage where some complex issues are being discussed; 2) it is on hold because of conflicting priorities at Canonical; 3) it is written in Golang, so I'd have trouble contributing to it.

It is available to install from the Ubuntu repository (in Software Manager or APT), and the installation is dead simple.

I've also looked at Sanoid and ZFSBootMenu. These seem like overkill, and the latter seems to require that the operating system(s) be (re)installed on top of it -- a showstopper, IMO.

So, I guess that I'm waiting on Canonical to make the next move. :/

sskras commented 2 months ago

@wallart1: Umm, why would supporting ZFS snapshots need any boot-related tools like Zsys or ZFSBootMenu? Without having done any R&D, I somehow still don't get that part.

wallart1 commented 2 months ago

@sskras, thanks for your question.

I use Timeshift when doing maintenance on the OS or on apps that are installed on the OS, and also when I am about to do something that has an elevated risk to the OS. It is normally the case that, if something goes wrong, the OS can still be fully booted, and I can use Timeshift to revert the changes. But, it can also be the case that I did something that makes the OS unbootable. So, I want to be able to roll back changes before boot time.

I realize that Timeshift doesn't have this feature. This is an easy opportunity to make this project better than that.

If you have read the write-up that I pointed out in a previous post, you will realize that "taking a snapshot" from inside an Ubuntu ZFS system is more complex than it first seems. The author/developer had to invent the concept of a "system state" that represents all of the many datasets, snapshots and attributes that exist when the system state is saved. So, simply rolling back a single snapshot after you've messed something up is not sufficient.

Now, I happen to have a server that runs a hypervisor (Proxmox in my case) which makes saving and reverting the state of a virtual machine a very simple matter. It only has one or two virtual disks to worry about. But, my daily driver system does not have a hypervisor. This is where having a pre-boot rollback capability is critical, in my view.

Of course, the other use for snapshots is to save the state of user data. I don't think that Timeshift handles this very well. It looks like an afterthought to me. Zsys handles this out of the box without giving it a second thought. (However, this is an area where a major discussion exists. See, for example.)

If I tried to rewrite Zsys (because the existing one is written in a language that I don't have expertise in), then the scope of the project would go far beyond anything I am willing to tackle.

I suppose I could begin with just the capability to manage snapshots of user data. But that is not really the intent of the project, as I see it. There are other backup programs that do a better job of this than I could.

At the same time, I am interested in hearing from people what features they want. So, again, thanks for your question. :-)

P.S.: And, as always, remember that snapshots are not substitutes for backups!