cgruver opened this issue 2 years ago
In principle, it'd be good to support this. There hasn't been much demand for it, though, since LVM is mostly useful for reconfiguring storage on already-running systems, and on Ignition-based systems you should probably just reprovision the node instead.
You can accomplish something similar with RAID linear mode. Your example corresponds to something like this:
variant: fcos
version: 1.4.0
storage:
  disks:
    # assuming /dev/vda is the install disk
    - device: /dev/vda
      partitions:
        # replace existing root partition with swap
        - label: swap
          number: 4
          wipe_partition_entry: true
          size_mib: 16384
        - label: root1
    - device: /dev/vdb
      wipe_table: true
      partitions:
        - label: root2
  raid:
    - name: md-root
      level: linear
      devices:
        - /dev/disk/by-partlabel/root1
        - /dev/disk/by-partlabel/root2
  filesystems:
    - device: /dev/disk/by-partlabel/swap
      format: swap
      wipe_filesystem: true
      # enable swap automatically
      with_mount_unit: true
    - device: /dev/md/md-root
      label: root
      format: xfs
      wipe_filesystem: true
However, there's currently a bug in the first-boot provisioning code (https://github.com/coreos/coreos-installer/pull/696), so `level: linear` won't work. `level: raid0` is a reasonable workaround on FCOS 35+, but won't work on FCOS 34: Fedora 34's mdadm doesn't include this commit, which is needed when creating RAID 0 volumes with non-uniform component sizes on modern kernels.
Thanks @bgilbert
My use case is a home lab, bare-metal install of OpenShift. Some of the NUCs that I have come with two M.2 SATA III slots.
So, my intent is to use those as storage nodes and slice off a bit of the total for FCOS and leave the rest as one or more block devices for Ceph.
I'll try the RAID route.
Actually, even while I'm typing... I think that the NUCs might even have onboard RAID support... Duh... ;-) That would make more sense.
We can leave this open to see if there is any production interest in LVM support, or close it as an edge case.
Cheers.
> However, there's currently a bug in the first-boot provisioning code (coreos/coreos-installer#696) so `level: linear` won't work. `level: raid0` is a reasonable workaround on FCOS 35+, but won't work on FCOS 34.
Another workaround, I think, is to specify an explicit UUID via `options: ["--uuid", "$MY_UUID"]` in the `raid` section of the Butane config, and then append the `rd.md.uuid=$MY_UUID` karg via the `kernel_arguments` section.
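A minimal sketch of that UUID workaround, assuming a Butane `fcos` 1.4.0+ config (the UUID below is a hypothetical placeholder in mdadm's colon-separated hex format; substitute one of your own):

```yaml
variant: fcos
version: 1.4.0
storage:
  raid:
    - name: md-root
      level: linear
      devices:
        - /dev/disk/by-partlabel/root1
        - /dev/disk/by-partlabel/root2
      # placeholder UUID; replace with your own value
      options: ["--uuid", "00000000:00000000:00000000:00000000"]
kernel_arguments:
  should_exist:
    # must match the UUID passed to mdadm above
    - rd.md.uuid=00000000:00000000:00000000:00000000
```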
> I think that the NUCs might even have onboard RAID support... Duh... ;-) That would make more sense.
Note that it might be FakeRAID (i.e. software RAID that's also supported by the firmware during boot), which wouldn't help you here.
Let's leave this issue open. LVM support is probably worth considering in the long term.
@bgilbert you are correct. It creates a device under /dev/md...
I'm going to tinker with the RAID in ignition approach.
Has there been any update to this, or any potential workarounds that can accomplish something similar? My use case is a K8s cluster on Fedora CoreOS nodes, and I'd like to be able to provision LVM for Rook Ceph using Ignition.
@djds You can likely reverse engineer my lab deployment scripts to get the ignition config modifications that I use.
The script that creates the machine specific ignition files is here: https://github.com/cgruver/kamarotos/blob/main/bin/clusterButaneConfig.sh
I'm not using LVM, but directly modifying the partition table via ignition.
You will need the `butane` CLI: https://github.com/coreos/butane
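For example, a Butane config can be transpiled to an Ignition config like this (using the upstream `butane` CLI's `--strict`, `--pretty`, and `--output` flags; `cluster.bu` is an illustrative filename):

```shell
# Convert a Butane config to an Ignition config.
# --strict treats warnings as errors; --pretty makes the JSON readable.
butane --strict --pretty cluster.bu --output cluster.ign
```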
I'd like to second the request for LVM integration in Ignition. It would be really helpful for bare-metal deployments.
+1 for this
For provisioning OpenShift worker nodes on bare metal that will also host Ceph storage, it would be nice to be able to configure Logical Volume Management via Ignition.
This would be in support of servers with multiple SSDs installed.
The implementation might be able to borrow from the methods used by Anaconda Kickstart. For example:
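A representative Kickstart LVM snippet of the kind being referenced (device names, mount point, and volume group names are illustrative, not from the original post):

```
# Kickstart LVM layout (illustrative): carve a physical volume from the
# second disk, then build a volume group and logical volume on top of it.
part pv.01 --ondisk=sdb --size=1 --grow
volgroup vg_data pv.01
logvol /var/lib/ceph --vgname=vg_data --name=lv_ceph --fstype=xfs --size=1 --grow
```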