hickersonj closed this issue 9 months ago.
lvmThinPools:
  - name: lvm-thin
    thinVolume: thinpool
    volumeGroup: ""
    devicePaths:
      - /dev/loop500
That won't work; the operator is very picky about which devices it can "format". To work around this, instead of specifying devicePaths, do the setup yourself on each node:
pvcreate /dev/loop500
vgcreate vg1 /dev/loop500
lvcreate -l 100%FREE --thinpool vg1/thinpool
Then use thinVolume: thinpool and volumeGroup: vg1, and leave out devicePaths.
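With that manual setup done, the values from the snippet above shrink to something like this (a sketch only; the pool name lvm-thin is just carried over from the original config):

lvmThinPools:
  - name: lvm-thin
    thinVolume: thinpool
    volumeGroup: vg1
    # no devicePaths: the PV, VG and thin pool were created by hand on each node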
Second issue: I think you are using a very old version of DRBD, which seems to cause errors in the monitoring image. If DRBD is installed on the host, make sure it is up to date (currently 9.1.6). If you used one of the init containers, make sure it didn't load DRBD 8.4, which is sometimes packaged with the host OS. Check cat /proc/drbd to find out which DRBD version you are running.
@WanzenBug this is very helpful!
I did get a bit further by creating the PV before seeing your comment. However, it looks like I'll need to upgrade some things on the host per your instructions:
lvcreate -l 100%FREE --thinpool vg1/thinpool
modprobe: FATAL: Module dm-thin-pool not found in directory /lib/modules/5.10.99
/sbin/modprobe failed: 1
thin-pool: Required device-mapper target(s) not detected in your kernel.
Run `lvcreate --help' for more information.
I'll enable DM_THIN_PROVISIONING and upgrade the DRBD version.
cat /proc/drbd
version: 8.4.11 (api:1/proto:86-101)
I'll fix those items and see where I get.
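For reference, once the kernel is rebuilt with thin provisioning enabled, something like this should confirm the dm-thin side is fixed (a sketch; the config file locations are distro-dependent and /proc/config.gz only exists when CONFIG_IKCONFIG_PROC is set):

# check the kernel config for thin-provisioning support
zgrep DM_THIN_PROVISIONING /proc/config.gz 2>/dev/null || grep DM_THIN_PROVISIONING /boot/config-"$(uname -r)"
# confirm the thin-pool target is actually available to device-mapper
modprobe dm-thin-pool && dmsetup targets | grep thin-pool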
@hickersonj did you manage to get it working?
Also, I am curious whether there is any way to specify in the chart which devices are available on EACH node, since the nodes can be completely different from each other. I have machines with 2x1TB disks, others with 1x512GB NVMe, and others with 4TB HDD disks.
Thanks in advance.
@colegatron no, I did not get it to work and gave up. It was a bit too complex for our use case.
In the end, the best solution is not to use devices in the Linstor configuration at all, for various reasons: it is not easy to handle in the chart (or a standard config), and device names can vary depending on the host OS.
I am used to having everything automated in three steps: 1) hardware and OS provisioning, 2) Kubernetes cluster provisioning, and 3) Kubernetes services and workloads provisioning. So I moved the preparation of the storage devices into step 1 with Ansible, where I know which kind of storage devices each node has, and I set up the LVM volume groups there. For example, I have servers with a root partition and a storage partition on an NVMe device, plus 2 extra SSD disks and 2 extra HDD disks, so I can create a VG for each storage type.
Then, in the Linstor Helm chart, you only need to specify the "volumeGroup"s; it will take each VG and configure it to be used in the storagePools you specify. Remember, though, that the VGs must not contain any LVs or filesystems before you install Linstor.
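As a rough illustration of that layout (a sketch only: the pool and VG names ssd-pool, vg_ssd, hdd-pool and vg_hdd are made up, and the exact keys should be checked against the chart's values file; besides the lvmThinPools shown at the top of this thread, the chart also has a thick lvmPools list):

lvmPools:
  - name: ssd-pool       # hypothetical pool name
    volumeGroup: vg_ssd  # empty VG prepared in step 1 (e.g. with Ansible) on the SSD disks
  - name: hdd-pool       # hypothetical pool name
    volumeGroup: vg_hdd  # empty VG on the HDD disks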
Another thing I learnt: if you want to reuse storage devices that still hold PVCs/LVs created by Linstor, they will clash with the newly created volumes and will not work. You need to stop them manually, first at the DRBD level and then at the LVM level.
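In case it helps, that teardown looks roughly like this on each affected node (a sketch; <resource>, <vg>/<lv> and <device> are placeholders to fill in from drbdadm status / lvs output):

drbdadm down <resource>   # stop the leftover DRBD resource first
lvremove <vg>/<lv>        # then remove the stale backing LV
wipefs -a /dev/<device>   # optionally clear old signatures before reusing the device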
The csi-nodes, ha-controller and csi-controller are unable to connect to the piraeus-op-cs service.
My helm command is:
I changed the storage-pools to thin provisioning because the thick pool kept throwing an error about not finding the drbdpool:
I do not have entire disks to allocate to the operator, so I’m using loopback devices that are sitting on the disk:
The pod that is crashing is the piraeus-op-ns-node, and the drbd-prometheus-exporter container within it:
The pod doesn’t crash after I remove the satelliteSet.monitoringImage:
However, with that change the csi/ha-controller pods just hang in the init phase: