
Concept/draft for managing several disks storage #1823

Open zamentur opened 3 years ago

zamentur commented 3 years ago

This is just because I've been writing a tutorial about adding external storage, and it raised a thousand questions for me...

Use cases

Several YunoHost users want to be able to:

All of these use cases are about disk-management features. Many NAS systems are quite advanced on these questions, and I think YunoHost could explore this kind of feature a bit.

ZFS vs LVM+EXT4

I think we should configure a ZFS filesystem by default if we can: it's quite simple to create a ZFS pool with a single disk, and attaching a second disk later to turn it into a mirror is quite easy.

Moreover, filesystems on top of a ZFS pool don't need sizing, so we can simply ask the user on which path to mount the FS.

However, I think we can create an abstraction able to manage LVM/ext4/mount --bind as well as ZFS. About ZFS, we need to check the impact on RAM.
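For illustration, this is roughly what the default setup could look like with plain ZFS commands (a minimal sketch; device and dataset names are examples):

# Create a pool named "data" backed by a single disk
$ zpool create data /dev/sdc
# Attach a second disk to turn the single-disk vdev into a mirror
$ zpool attach data /dev/sdc /dev/sdd
# Datasets don't need sizing; just pick a mountpoint
$ zfs create -o mountpoint=/home/yunohost.app data/apps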

Auto mount

We can use udev to automount, for example with this kind of rules file at /etc/udev/rules.d/11-media-by-label-auto-mount.rules:

KERNEL!="sd[a-z][0-9]", GOTO="media_by_label_auto_mount_end"
# Import FS infos
IMPORT{program}="/sbin/blkid -o udev -p %N"
# Get a label if present, otherwise specify one
ENV{ID_FS_LABEL}!="", ENV{dir_name}="%E{ID_FS_LABEL}"
ENV{ID_FS_LABEL}=="", ENV{dir_name}="usbhd-%k"
# Global mount options
ACTION=="add", ENV{mount_options}="relatime"
# Filesystem-specific mount options
ACTION=="add", ENV{ID_FS_TYPE}=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,uid=1007,gid=1007,umask=002"
# Mount the device
ACTION=="add", RUN+="/bin/mkdir -p /media/%E{dir_name}", RUN+="/bin/mount -o $env{mount_options} /dev/%k /media/%E{dir_name}"
# Clean up after removal
ACTION=="remove", ENV{dir_name}!="", RUN+="/bin/umount -l /media/%E{dir_name}", RUN+="/bin/rmdir /media/%E{dir_name}"
# Exit
LABEL="media_by_label_auto_mount_end"
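After dropping this file in place, the rules can be applied without a reboot using standard udev commands:

$ udevadm control --reload-rules
$ udevadm trigger --subsystem-match=block

One caveat: on recent systemd versions, systemd-udevd runs in a private mount namespace, so mounts issued from RUN rules may not be visible to the rest of the system; a udev-triggered systemd mount unit may be needed instead.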

Webadmin mockups

Note: this mockup is derived from the groups interface. Free space and properties could probably be displayed in a better way.

[storage mockup image]

CLI draft

Create a storage pool with disks sdc and sdd, on which /home/yunohost.app, /home/yunohost.multimedia and /var/mail will be bound:

$ ynh storage create data --disks sdc sdd --paths /home/yunohost.app /home/yunohost.multimedia /var/mail
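With a ZFS backend, this command could plausibly translate to something like the following (a hypothetical sketch, not part of the proposal; existing data on those paths would also need to be migrated first):

# Create a mirrored pool from the two disks
$ zpool create data mirror /dev/sdc /dev/sdd
# One dataset per bound path (layout is illustrative)
$ zfs create -o mountpoint=/home/yunohost.app data/yunohost.app
$ zfs create -o mountpoint=/home/yunohost.multimedia data/yunohost.multimedia
$ zfs create -o mountpoint=/var/mail data/mail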

Add / remove a disk to/from the storage pool. By default, storage pools are in mirror mode.

$ ynh storage add data --disks sdg 
Syncing data storage pool. You can check sync status with `ynh storage info data`

$ ynh storage remove data --disks sdg 

Add a new path we want to live on this storage pool:

$ ynh storage add data --paths /home
This will stop some services in order to migrate your data safely. Continue (y/N)? y
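Behind the scenes, such a migration could look roughly like this (a sketch under assumptions: the stopped services and the bind-mount approach are illustrative, per the abstraction discussed above):

$ systemctl stop dovecot postfix              # stop services using the path
$ zfs create -o mountpoint=/mnt/data/home data/home
$ rsync -aHAX /home/ /mnt/data/home/          # copy data, preserving links/ACLs/xattrs
$ mount --bind /mnt/data/home /home           # expose it back at the original path
$ systemctl start dovecot postfix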

To move a path onto another pool, we just add it to the other storage pool:

$ ynh storage add system --paths /home

Get some info about a storage pool:

$ ynh storage info data --full
data:
    disks: 
      - sdc
      - sdd
    spare: 
      - sde
    paths: 
      - /home/yunohost.app
      - /home/yunohost.multimedia
      - /var/mail
    properties:
      size: 850GB
      free: 100GB
      type: zpool
      synchronized: 100%
      encrypted: no
      redundancy: mirror
      io_write: 50MB/s

Snapshot a volume

$ ynh storage snapshot create data
$ ynh storage snapshot list data
$ ynh storage snapshot remove data
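For reference, the ZFS backend already provides these primitives directly (standard zfs commands; the snapshot name is an example):

$ zfs snapshot -r data@before-upgrade    # -r snapshots every dataset in the pool
$ zfs list -t snapshot
$ zfs destroy -r data@before-upgrade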
renne commented 3 years ago

I agree with the flexibility and reliability of self-healing ZFS. It also allows storage to be compressed, encrypted and/or de-duplicated. The snapshot capability simplifies incremental backups.
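For context, these are per-dataset properties in ZFS and can be enabled independently (standard commands; pool and dataset names are examples):

$ zfs set compression=lz4 data
$ zfs set dedup=on data                  # caution: dedup is very RAM-hungry
# Native encryption must be chosen when the dataset is created:
$ zfs create -o encryption=on -o keyformat=passphrase data/private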

renne commented 3 years ago

In fact I was planning to run YunoHost on a Raspberry Pi 4 with 8 GB RAM, a dual-SATA USB adapter and two SSDs. When I realized YunoHost doesn't support RAID1/mirroring, I installed YunoHost in a ZVOL on my Proxmox Virtual Environment host. Automatic backup and compression of a 1 TB ext4 virtual disk holding only 1.4 GB of data is a pain in the ass. The snapshot capability of ZFS would be quite useful.

Debian Buster Root on ZFS
Debian 10 Root on Encrypted ZFS

Including pre-compiled ZFS modules and pre-installed zfs-utils in the YunoHost ISOs would simplify this a lot.

renne commented 3 years ago

Since version 6.4, Proxmox Virtual Environment installs the Debian Buster root system on ZFS. I suggest having a look into the PVE installer ISO image.

JOduMonT commented 3 years ago

ZFS needs RAM to be happy; a rule of thumb for ZFS is 1 GB per TB. When you have enough RAM, the caching system gives your system high-speed I/O.

> The Snapshot capability of ZFS would be quite useful.

LVM can also do snapshots; this is how I do hot backups of my laptop.
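For reference, an LVM snapshot-based hot backup looks roughly like this (a sketch; the volume group, sizes and paths are examples):

# Create a 5G copy-on-write snapshot of the "home" logical volume
$ lvcreate --size 5G --snapshot --name home-snap /dev/vg0/home
$ mount -o ro /dev/vg0/home-snap /mnt/snap    # back up from this frozen view
$ umount /mnt/snap
$ lvremove -y /dev/vg0/home-snap              # drop the snapshot when done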

Question: but in the end, wouldn't it be better to leave ZFS to Proxmox and focus on being able to run YunoHost under LXC?

alexAubin commented 3 years ago

> Question: but in the end, wouldn't it be better to leave ZFS to Proxmox and focus on being able to run YunoHost under LXC?

Not sure if you're implying it currently can't be run in LXC, but it can :wink:

JOduMonT commented 3 years ago

> Not sure if you're implying it currently can't be run in LXC, but it can :wink:

I know it can ;) just not every app, such as ZeroNet. While I managed to run ZeroNet with Tor inside another container (also based on Debian 10), under YunoHost this app can't: it seems to be an issue with Tor, which is unable to create a node. [screenshot]

renne commented 3 years ago

@JOduMonT I had bad experiences with LXC failing after updating the host OS or guest appliances because of different kernel versions, so I have stopped using LXC and use VMs only. The goal is to use YunoHost with ZFS on bare metal (e.g. Raspberry Pi or amd64 with two SSDs).

renne commented 2 years ago

Today the ext4 root filesystem of my YunoHost installation broke after I/O errors on inodes without any obvious reason (Proxmox VM). The ext4 root filesystem was mounted read-only and could not be remounted writable until I repaired it with fsck. I would really appreciate a self-healing filesystem!