openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

Distribution installers (from iso) should do more ZFS #14355

Open mcladams opened 1 year ago

mcladams commented 1 year ago

Edit: the below describes distribution actions outwith OpenZFS itself.

OpenZFS should facilitate ZFS as an option, as btrfs is, on multiple distributions, without requiring the wiping of an entire drive.

Describe the feature you would like to see added to OpenZFS

Users can install to a ZFS root without wiping an entire disk via multiple distributions' live installer ISOs.

How will this feature improve OpenZFS?

Users will no longer be afraid of installing a distribution to ZFS due to the current behaviour of requiring a full disk.

Additional context

Distribution installers that actually provide ZFS as an option should not need an entire drive.

rincebrain commented 1 year ago

It's not clear to me what you're referring to when you say "requires an entire drive". Are you thinking of a specific installer which does this? Because OpenZFS certainly doesn't require that.

mcladams commented 1 year ago

I replied but it may have been lost. The few distribution installers that offer to install to ZFS require a full disk. Am I "thinking of a specific distribution installer that requires a full disk"? All of them. I'll happily be proved wrong.

mskarbek commented 1 year ago

@mclad and what exactly do you expect from the OpenZFS maintainers? That is a distribution/installer issue. OpenZFS will happily create a pool on a partition if installers provide one, but it is the installer's job to do that.
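
For illustration, a pool on a single partition rather than a whole disk is a one-liner; a minimal sketch, where the device path, pool name, and property values are all placeholders:

# assumes /dev/sda3 is a spare partition; all names here are placeholders
zpool create -o ashift=12 -O compression=lz4 -O mountpoint=none rpool /dev/sda3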

mcladams commented 1 year ago

OK, if you think your work is done... why can't I install onto /dev/sdX with ZFS? Make up your minds whether you (plural) are only ZFS for massive server installations, or provide equivalent support and respect to desktop users.

What do I expect from the ZFS maintainers? Better dialogue with the major distributions, such that their installers can install to a ZFS root on a partition /dev/sdX, or in mirror or raidz configurations.

The proxmox-ve installer is perhaps the best example of what's possible, though it still requires entire disks.

Installing to a ZFS root should be as well supported, and no more complex, than installing to e.g. btrfs.

mcladams commented 1 year ago

Additionally, and in particular: how many OpenZFS maintainers come from a desktop background, as opposed to servers? Yes, ZFS is incredible with dRAID, or with massive vdev sets of mirrors across a hundred 14 TB drives.

Open tools like zfsbootmenu and sanoid provide this functionality to desktops, and this use case will only grow.

On my testing box I have 12 distributions booting seamlessly with zfsbootmenu, for dev and pen-testing purposes. But in every single case I installed to an ext4 partition, then rsynced to waiting ZFS datasets, then chrooted, ran genfstab, installed or built ZFS, updated whatever initramfs, etc. Use cases such as mine will become more frequent.
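
A rough sketch of that migration, assuming a live environment; every name is a placeholder, and the bootloader/initramfs steps are distro-specific:

zpool import -R /mnt/zfs zroot                    # import with an alternate root
zfs create -o canmount=noauto -o mountpoint=/ zroot/ROOT/distro/default
zfs mount zroot/ROOT/distro/default               # lands under /mnt/zfs via -R
mkdir -p /mnt/ext4 && mount /dev/sda2 /mnt/ext4   # the fresh ext4 install
rsync -aHAX /mnt/ext4/ /mnt/zfs/                  # copy everything across
# then bind-mount /dev, /proc and /sys into /mnt/zfs, chroot, fix /etc/fstab,
# install or build ZFS, rebuild the initramfs, and update the boot entries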

Advanced installation options would be bliss, such as: install to zroot/ROOT/distro/bootenv (see the sketch below).

Meta and off-topic: or even just install to a /target I have prepared and mounted previously with whatever firestarter.
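
For concreteness, a sketch of creating such a boot-environment hierarchy, with the distro and dataset names as placeholders:

zfs create -o canmount=off -o mountpoint=none zroot/ROOT
zfs create -o canmount=off -o mountpoint=none zroot/ROOT/debian12
zfs create -o canmount=noauto -o mountpoint=/ zroot/ROOT/debian12/default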

ryao commented 1 year ago

While servers pay the bills for me right now, my original reason for doing ZFS development was for home use. Many of my early patches were aimed at that. Many of the patches I do today benefit both equally. I still make extensive use of ZFS in my home.

That said, a younger version of myself had dismissed the idea of in-place filesystem conversion, since the two different filesystems should have different alignment requirements, and I was only thinking about the btrfs tool that supposedly left all data in place. However, thinking about it now: if we assume moving things around is okay, as long as restoring to a consistent ext4 state works, then I suspect it is doable. If I think about Microsoft's one-way FAT-to-NTFS conversion, which does not allow going back, it seems even more doable, even though trying to fabricate a Merkle tree that is preinitialized with data is likely to be a pain.

It would still need some background research that I cannot immediately do, especially since I already have a few projects taking my attention right now, but I will say this: you have piqued my interest.

mcladams commented 1 year ago

Off-topic: the first thing I thought of when learning of ZFS, only late last decade, was: I need this for data integrity. Almost immediately followed by: this will be perfect for my multiboot requirements.

I have an aversion to testing something properly in a VM; I feel I need to run it on metal. ZFS snapshots and now zfsbootmenu make it easy. I'm just waiting until my zroot/ROOT/distro/bootenv hierarchy can be joined by Windows in there, for cases like legacy VBA code... and an odd game or two.

And when I can run any bootable-on-metal dataset under KVM from whichever other distribution... wake me, I'm obviously dreaming.

mskarbek commented 1 year ago

@mclad still, what do you expect from the OpenZFS maintainers? You can't expect dialogue when there is no will for one at the beginning. What you describe is possible without any changes on the OpenZFS side; the only changes needed are on the installer side. So if you want better support in, for example, Ubuntu, go and talk to Canonical. Just remember that because of licensing "issues" only a few distributions have attempted to integrate OpenZFS, and even fewer have brought that work to a somewhat usable state. Don't expect that the OpenZFS maintainers will maintain forks of every distribution installer. They won't. It's the installer maintainers' job to add proper support, and they have everything they need to do so from the OpenZFS perspective.

mcladams commented 1 year ago

@mskarbek

Firstly, I expect dialogue; hence this is a feature request, not a bug report.

(And just for reference, it's late in GMT+8 and I'm Australian, so some comments may be overly jaunty.) Zeroly, the presumption that, as an unknown, I am only now making issues on GitHub and elsewhere and replying to others does not preclude the fact that I've read almost everything from the docs sites and links, and have been testing ZFS edge cases for many years. Your suggestion:

still, what do you expect from OpenZFS maintainers? You can't expect dialog when there is no will for one in the beginning. What you describe is possible without any changes on the OpenZFS side. Only changes needed are to be made on the installers side. So if you want better support for example in Ubuntu go and talk to the Canonical.

So because I test a dozen distributions on ZFS 2.1.7 on root with zfsbootmenu... I, as an unknown and relatively unschooled, should approach Canonical and another dozen distributions? Is that not something the ZFS devs would have the contacts for?

Licensing issues are a f..rigging smokescreen over excuses because it's too hard.

FFS, I installed btrfs on a RAID6 setup in the mid-00s. btrfs, although not technically comparable in my opinion, is now a breeze to install to mirror or RAID configurations, by comparison both to back then and to ZFS now.

Here's zls from one testing box I'm on currently. zfs list fails in not exposing the canmount and mounted properties by default. My use case is testing various distributions. (alias zls='zfs list -o name,used,referenced,canmount,mounted,mountpoint')

root@lunar-gamer:~# zls -r zroot
NAME                                 USED     REFER  CANMOUNT  MOUNTED  MOUNTPOINT
zroot                                121G      244K  off       no       none
zroot/DATA                          3.25G      288K  off       no       /data
zroot/DATA/media                    2.19G     2.09G  on        yes      /data/media
zroot/DATA/projects                  480M      303M  on        yes      /data/projects
zroot/DATA/projects/ref              192K      192K  on        yes      /data/projects/ref
zroot/DATA/storage                   606M      606M  -         -        -
zroot/DATA/vm                        192K      192K  on        yes      /data/vm
zroot/DATA/zvol                      192K      192K  off       no       none
zroot/LINUX                         5.81G      192K  off       no       /
zroot/LINUX/opt                     3.85G     3.67G  on        yes      /opt
zroot/LINUX/srv                      272K      192K  on        yes      /srv
zroot/LINUX/usr                      245M      192K  off       no       /usr
zroot/LINUX/usr/local                245M      241M  on        yes      /usr/local
zroot/LINUX/var                     1.72G      192K  off       no       /var
zroot/LINUX/var/lib                 1.72G      192K  off       no       /var/lib
zroot/LINUX/var/lib/containers       192K      192K  off       no       /var/lib/containers
zroot/LINUX/var/lib/snapd           1.72G      520K  noauto    no       /var/lib/snapd
zroot/LINUX/var/lib/snapd/snaps     1.72G     1.72G  noauto    no       /var/lib/snapd/snaps
zroot/ROOT                           111G      192K  off       no       none
zroot/ROOT/debian11                 24.8G      192K  off       no       none
zroot/ROOT/debian11/console         1.19G     1.06G  noauto    no       /
zroot/ROOT/debian11/home            4.09G     4.00G  noauto    no       /home
zroot/ROOT/debian11/mx21-fluxbox    1.49G     2.35G  noauto    no       /
zroot/ROOT/debian11/pve-console     2.34G     3.18G  noauto    no       /
zroot/ROOT/debian11/pve-mystery     3.29G     3.29G  noauto    no       /
zroot/ROOT/debian11/pve30-cli       5.02G     5.61G  noauto    no       /
zroot/ROOT/debian11/pve30-gnm       6.82G     6.88G  noauto    no       /
zroot/ROOT/debian11/root             527M      525M  noauto    no       /root
zroot/ROOT/debtesting               18.8G      192K  off       no       none
zroot/ROOT/debtesting/console       1.36G     1.20G  noauto    no       /
zroot/ROOT/debtesting/home           548M      492M  noauto    no       /home
zroot/ROOT/debtesting/kaisen_kde    87.7M     14.6G  on        no       none
zroot/ROOT/debtesting/kaisen_lxqt   16.8G     14.6G  noauto    no       /
zroot/ROOT/debtesting/root          44.7M     33.5M  noauto    no       /root
zroot/ROOT/fedora36                 12.5G      192K  off       no       none
zroot/ROOT/fedora36/home             400M      400M  noauto    no       /home
zroot/ROOT/fedora36/nobara          12.2G     8.89G  noauto    no       /
zroot/ROOT/fedora36/root             584K      308K  noauto    no       /root
zroot/ROOT/ubuntu2204               8.48G      192K  off       no       /
zroot/ROOT/ubuntu2204/gnome         7.17G     5.33G  noauto    no       /
zroot/ROOT/ubuntu2204/home           612M      333M  noauto    no       /home
zroot/ROOT/ubuntu2204/root           201M      199M  noauto    no       /root
zroot/ROOT/ubuntu2204/server         519M     5.98G  noauto    no       /
zroot/ROOT/ubuntu2304               41.3G      192K  off       no       none
zroot/ROOT/ubuntu2304/gnome-nosnap  25.7G     15.1G  noauto    no       /
zroot/ROOT/ubuntu2304/home          6.27G     2.41G  noauto    yes      /home
zroot/ROOT/ubuntu2304/root          20.9M     14.6M  noauto    yes      /root
zroot/ROOT/ubuntu2304/studio-kde    9.29G     11.2G  noauto    yes      /
zroot/ROOT/void                     5.31G      192K  off       no       none
zroot/ROOT/void/home                 192K      192K  noauto    no       /home
zroot/ROOT/void/root                 192K      192K  noauto    no       /root
zroot/ROOT/void/void-xcfe           5.31G     5.31G  noauto    no       /

And I have the function zlsm() { zls "$@" | grep -e ' on ' -e ' yes '; }

root@lunar-gamer:~# zlsm
vault/data/media                    4.39G      302M  on        yes      /data/media
vault/data/opt                        96K       96K  on        yes      /data/opt
vault/devops/PVE/vz                 89.1G     5.01G  on        yes      /var/lib/vz
vault/media/APP/downloads           53.0G     53.0G  on        yes      /share/downloads
vault/media/APP/glob                20.6G      104G  on        yes      /share/glob
vault/media/APP/library_pc           176G      176G  on        yes      /share/library_pc
vault/media/LINUX/lxsteam           2.08G     1.58G  on        yes      /home/mike/.local/Steam
vault/media/MUSIC/dj_bylabel         167G      167G  on        yes      /share/dj_bylabel
vault/media/video/library            139G      139G  on        yes      /share/library
zroot/DATA/media                    2.19G     2.09G  on        yes      /data/media
zroot/DATA/projects                  480M      303M  on        yes      /data/projects
zroot/DATA/projects/ref              192K      192K  on        yes      /data/projects/ref
zroot/DATA/vm                        192K      192K  on        yes      /data/vm
zroot/LINUX/opt                     3.85G     3.67G  on        yes      /opt
zroot/LINUX/srv                      272K      192K  on        yes      /srv
zroot/LINUX/usr/local                245M      241M  on        yes      /usr/local
zroot/ROOT/debtesting/kaisen_kde    87.7M     14.6G  on        no       none
zroot/ROOT/ubuntu2304/home          6.27G     2.41G  noauto    yes      /home
zroot/ROOT/ubuntu2304/root          20.9M     14.6M  noauto    yes      /root
zroot/ROOT/ubuntu2304/studio-kde    9.29G     11.2G  noauto    yes      /
mcladams commented 1 year ago

Off-topic: other test boxes are more Arch-, Void- and Nix-focused. Let's face it, Canonical has killed Ubuntu for many with zsys and snapd; and given Arch KDE on the Steam Deck, Arch will eventually win.

#1 on DistroWatch is MX Linux, which does not have systemd by default.

#2 is EndeavourOS, based on Arch.

Wherever it ranks, Void Linux natively installs zfsbootmenu, which I use on every system by default after some testing last year. I can't explain the pleasure I felt finally being able to run apt purge zsys.

Fabian-Gruenbichler commented 1 year ago

Proxmox dev here - our installer only supports full disks as an installation target on purpose. It's a simple, fast, straightforward bare-metal installer that is not supposed to cover every use case under the sun; its purpose is to get a usable, sane install onto your server in a few minutes without having to answer hundreds of questions. You can always use a live CD + debootstrap if you want a custom/niche setup that is fully under your control, or re-use the more customizable Debian installer and install Proxmox products on top.

mcladams commented 1 year ago

Nice. So much respect for Proxmox. Massively used in academic communities, i.e. by poor students such as I.

I know it's edge use, like most things I do; I install Proxmox to ZFS on the smallest SSD I have, using the option to leave most of my GB unformatted, then make it portable via GParted copy, Clonezilla, or zfs send/recv operations.

Or I just install proxmox-ve on something such as LMDE 5 on ext4, then rsync to ZFS, which works lovely for a dev box.

Outwith Proxmox, being able to handle firmware such as Nvidia or recent amdgpu makes installers just a little easier than debootstrap / mmdebstrap.

My idea is an advanced installer that will just say: you've mounted /target? Fine, I'll install there; then, advanced user, do what you need in the chroot before rebooting.
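
A minimal sketch of preparing such a /target by hand before handing over to an installer or debootstrap; all names are placeholders:

# run from a live environment; /dev/sda3 is a spare partition
zpool create -o ashift=12 -O mountpoint=none -R /target tpool /dev/sda3
zfs create -o canmount=noauto -o mountpoint=/ tpool/ROOT/newdistro
zfs mount tpool/ROOT/newdistro                 # mounts at /target thanks to -R
debootstrap bookworm /target http://deb.debian.org/debian
# then chroot into /target, install ZFS and an initramfs, set up the bootloader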

Cheers, Mike

mcladams commented 1 year ago

Off-topic: with Proxmox anything, I always put /var/lib/pve-cluster on its own ZFS dataset, first thing. Then I can test PVE with different boot environments but with that same directory and /etc/pve.
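
A sketch of that, assuming the pve-cluster service can be stopped safely; the dataset name is a placeholder:

systemctl stop pve-cluster                     # stop pmxcfs first
mv /var/lib/pve-cluster /var/lib/pve-cluster.bak
zfs create -o mountpoint=/var/lib/pve-cluster zroot/pve-cluster
cp -a /var/lib/pve-cluster.bak/. /var/lib/pve-cluster/
systemctl start pve-cluster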

mcladams commented 1 year ago

Elsewhere I made a detailed explanation of how I can have any distribution, even Ubuntu Server 18.04 or Fedora 36, or any Arch, eventually run with the latest OpenZFS and zfsbootmenu. Easy for me now, but not for the inexperienced. All the suggestions I make in ZFS issues come from many, many late nights of experimenting and failing until I don't.

mcladams commented 1 year ago

Off-topic: a final off-topic note. Fabian, I'll find you or other Proxmox devs elsewhere, but I'll just leave this here briefly while I have time: beyond installation and dataset creation, there is also mounting /var/lib/pve-cluster, as well as /var/lib/vz/templates for KVM and LXC, which is filled with symlinks to ISOs and tar.gz files from wherever; an old box on iSCSI currently. I've made the latest OMV with the latest PVE in LXC, not KVM, but it's an inelegant hack not worth the effort. I'll reply as much in the PVE and OMV forums, where "can we run OMV in LXC on ZFS" is a never-ending repeated question.

grahamperrin commented 1 year ago

FreeBSD versions 13.1 and greater install to OpenZFS by default.

https://docs.freebsd.org/en/books/handbook/book/#bsdinstall-part-zfs


https://github.com/openzfs/zfs/issues/14355#issue-1521885883

… should not need an entire drive

https://docs.freebsd.org/en/books/handbook/book/#bsdinstall-part-manual

mcladams commented 1 year ago

Thanks for the reply. But I quote from that (btw excellent) manual, 2.6.4, Guided ZFS partitioning/installation: "This partitioning mode only works with whole disks and will erase the contents of the entire disk."

The Proxmox VE installer is slightly better because it gives the option to leave however much unpartitioned space at the end of the disk[s] the user wants. I like to have a recovery distro installed there, and swap.

https://pve.proxmox.com/wiki/Installation ("Advanced ZFS Configuration Options"):

The installer creates the ZFS pool rpool. No swap space is created, but you can reserve some unpartitioned space on the install disks for swap. You can also create a swap zvol after the installation, although this can lead to problems (see https://pve.proxmox.com/wiki/ZFS_on_Linux#zfs_swap).
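
For completeness, the commonly documented shape of a swap zvol; the size and pool name are placeholders, and the deadlock caveats in the link above still apply:

zfs create -V 8G -b $(getconf PAGESIZE) -o compression=zle \
    -o logbias=throughput -o sync=always -o primarycache=metadata \
    -o secondarycache=none -o com.sun:auto-snapshot=false rpool/swap
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap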
