Open zamentur opened 3 years ago
I agree with the flexibility and reliability of self-healing ZFS. It also allows compressing, encrypting and/or de-duplicating storage, and the snapshot capability simplifies incremental backups.
In fact I was planning on running Yunohost on a RaspberryPi 4 8 GB RAM with a Dual-SATA-USB adapter and two SSDs. When I realized Yunohost doesn't support Raid1/Mirroring I installed Yunohost in a ZVOL on my Proxmox Virtual Environment host. Automatic backup and compression of a 1 TB Ext4 virtual disk with only 1.4 GB data is a pain in the ass. The Snapshot capability of ZFS would be quite useful.
Debian Buster Root on ZFS Debian 10 Root on Encrypted ZFS
Including pre-compiled ZFS modules and pre-installed zfs-utils in the Yunohost ISOs would simplify this a lot.
Since version 6.4, Proxmox Virtual Environment installs the Debian Buster root system on ZFS. I suggest having a look at the PVE installer. ISO-Image
ZFS needs RAM to be happy; a rule of thumb for ZFS is 1 GB of RAM per TB of storage. When you have enough RAM, the cache system gives your system high-speed I/O.
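On RAM-constrained machines like a RaspberryPi, the ZFS cache (ARC) can be capped via a kernel module option. A minimal sketch; the 2 GiB value is an arbitrary example, not a recommendation:

```
# /etc/modprobe.d/zfs.conf — cap the ZFS ARC at 2 GiB (value in bytes)
# Example value only; pick it based on what the host can actually spare.
options zfs zfs_arc_max=2147483648
```

The setting takes effect after reloading the zfs module or rebooting.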
The Snapshot capability of ZFS would be quite useful. LVM can also do snapshots; this is how I do hot backups of my laptop.
Question: but in the end, wouldn't it be better to leave ZFS to Proxmox and focus on being able to run YunoHost under LXC?
Not sure if you're implying it currently can't be run in LXC, but it can :wink:
I know it can ;) just not every app, such as ZeroNet.
While I managed to run ZeroNet with Tor inside another container also based on Debian 10, under YunoHost this app doesn't work; it seems to be an issue with Tor, which is unable to create a node.
@JOduMonT I had bad experiences with LXC failing after updating the host OS or guest appliances because of mismatched kernel versions, so I have stopped using LXC and use VMs only. The goal is to use Yunohost with ZFS on bare metal (e.g. RaspberryPi or amd64 with two SSDs).
Today the ext4 root filesystem of my Yunohost installation broke after I/O errors on inodes, without any obvious reason (Proxmox VM). The ext4 root filesystem was mounted read-only and could not be remounted writable until I repaired it with fsck. I would really appreciate a self-healing filesystem!
That's just because I've been writing a tutorial about adding external storage, and it raised a thousand questions for me...
Use cases
Several yunohost users want to be able to:
All of these use cases are about disk management features. A lot of NAS systems are very advanced on these questions. I think YunoHost could explore this kind of feature a bit.
ZFS vs LVM+EXT4
I think we should configure a ZFS filesystem by default if we can; indeed, it's quite simple to create a ZFS pool in mirror mode with one disk, and adding a mirrored disk later is quite easy.
Moreover, filesystems on top of a ZFS pool don't need sizing, so we can simply ask the user which path to mount the filesystem on.
However, I think we can create an abstraction able to manage LVM/Ext4/`mount --bind` or ZFS. About ZFS, we need to check the impact on RAM.
Auto mount
We can use udev to automount, for example with this kind of script: /etc/udev/rules.d/11-media-by-label-auto-mount.rules
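A minimal sketch of such a rules file, based on the commonly circulated label-based auto-mount snippet (device pattern and mount options are assumptions; note that on modern systemd hosts, `RUN+=` commands execute in udev's sandboxed mount namespace, so mounts may not be visible system-wide without extra configuration):

```
# /etc/udev/rules.d/11-media-by-label-auto-mount.rules (sketch)
KERNEL!="sd[a-z][0-9]", GOTO="media_by_label_auto_mount_end"
# Import filesystem info (label, type, ...)
IMPORT{program}="/sbin/blkid -o udev -p %N"
# Use the filesystem label as directory name, or fall back to the kernel name
ENV{ID_FS_LABEL}!="", ENV{dir_name}="%E{ID_FS_LABEL}"
ENV{ID_FS_LABEL}=="", ENV{dir_name}="usbhd-%k"
# Mount the device under /media/<dir_name> on plug-in
ACTION=="add", ENV{dir_name}!="", RUN+="/bin/mkdir -p /media/%E{dir_name}", RUN+="/bin/mount -o rw /dev/%k /media/%E{dir_name}"
# Clean up on removal
ACTION=="remove", ENV{dir_name}!="", RUN+="/bin/umount -l /media/%E{dir_name}", RUN+="/bin/rmdir /media/%E{dir_name}"
LABEL="media_by_label_auto_mount_end"
```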
Webadmin mockups
Note: this mockup is derived from the groups interface. Free space and properties could probably be displayed in a better way.

![storage mockup](https://user-images.githubusercontent.com/4080016/121244394-199ce500-c89f-11eb-935e-455e9d8cac47.png)
CLI draft
Create a storage pool with disks sdc and sdd, on which /home/yunohost.app, /home/yunohost.multimedia and /var/mail will be bound
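Under the hood, this could map to plain OpenZFS commands, roughly as follows (the pool name `storage` and the dataset names are illustrative; these require root and real disks):

```shell
# Create a mirrored pool from the two disks
zpool create -o ashift=12 storage mirror /dev/sdc /dev/sdd
# One dataset per bound path, mounted directly at its target
zfs create -o mountpoint=/home/yunohost.app        storage/apps
zfs create -o mountpoint=/home/yunohost.multimedia storage/multimedia
zfs create -o mountpoint=/var/mail                 storage/mail
```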
Add/remove a disk to/from the storage pool. By default, storage pools are in mirror mode.
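In OpenZFS terms, this could look like the following sketch (pool and device names illustrative):

```shell
# Attach a third disk to the existing mirror; resilvering starts automatically
zpool attach storage /dev/sdc /dev/sde
# Detach a mirror member again
zpool detach storage /dev/sde
# Check mirror health and resilver progress
zpool status storage
```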
Add a new path we want to live on a storage pool's disks:
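With ZFS, adding a path is just creating a dataset with the right mountpoint; a sketch (the path and dataset name are illustrative):

```shell
# New dataset mounted at the desired path; no sizing needed
zfs create -o mountpoint=/home/yunohost.backup storage/backup
```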
To move a path to another pool, we need to add it on the other storage pool:
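One way to implement the move with OpenZFS is snapshot-based replication; a sketch (`otherpool` is a hypothetical second pool):

```shell
# Replicate the dataset to the other pool, then remount it at the same path
zfs snapshot storage/mail@migrate
zfs send storage/mail@migrate | zfs receive otherpool/mail
zfs set mountpoint=/var/mail otherpool/mail
# Only once the copy is verified:
zfs destroy -r storage/mail
```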
Get some info about a storage pool:
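The underlying queries could be plain OpenZFS status commands, e.g.:

```shell
zpool status storage                         # health, member devices, errors
zpool list -o name,size,alloc,free storage   # capacity overview
zfs list -r storage                          # per-dataset usage and mountpoints
```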
Snapshot a volume
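In OpenZFS terms, snapshotting and rolling back a dataset would look like this sketch (dataset and snapshot names illustrative):

```shell
# Create a snapshot, list snapshots, and roll back to one
zfs snapshot storage/apps@before-upgrade
zfs list -t snapshot -r storage
zfs rollback storage/apps@before-upgrade
```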