galexrt / k8s-vagrant-multi-node

A Kubernetes Vagrant Multi node environment using kubeadm.
https://k8s-vagrant-multi-node.galexrt.moe/
Apache License 2.0

Automatically mount VirtIO disks to specific destination #97

Open archenroot opened 3 years ago

archenroot commented 3 years ago

**Feature Request**

So with these env vars:

```
DISK_COUNT ?= 2
DISK_SIZE_GB ?= 50
```

I get a state where each machine in libvirt has 2 VirtIO disks, but only one of them shows a size. The first has a size: (screenshot)

The second doesn't show a size, which is strange: (screenshot)

But when I create a file system on it, it has the correct size.

Another small issue: on the master node only /dev/vda and /dev/vdb are visible, but on the worker nodes I can see /dev/vda, /dev/vda1, /dev/vdb, and /dev/vdb1.

It seems disk recognition works differently on the master vs. the workers...

It would be good if the script somehow automounted these drives to specific mount points, e.g. in Vagrantfile_scripts' $prepareScript.
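As a rough illustration of what such a step could look like, here is a hypothetical sketch (the function name `prepare_disk` and the dry-run shape are my assumptions, not existing project code). It only *prints* the commands so the logic is easy to inspect; a real provisioner would execute them instead:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an automount step for $prepareScript (dry run).
prepare_disk() {
  local device="$1" mount_point="$2" fs="${3:-ext4}"
  # Format only if the device has no filesystem yet (blkid fails in that case).
  echo "blkid ${device} || mkfs.${fs} ${device}"
  echo "mkdir -p ${mount_point}"
  echo "mount ${device} ${mount_point}"
  # Persist the mount across reboots.
  echo "echo '${device} ${mount_point} ${fs} defaults 0 0' >> /etc/fstab"
}

prepare_disk /dev/vdb /var/lib/docker
```

Dropping the `echo`s would turn this into the actual provisioning step, but the ordering (probe, format, mount, persist) is the part that matters.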

I understand that it's not super easy, because:

  1. Visibility of disks differs per provider (VirtualBox vs. libvirt) and eventually also per guest OS (but maybe I am wrong about the OS part).
  2. As this is based on the DISK_COUNT variable, the possible new variable DISK_MOUNT_POINTS would need to be a kind of array, e.g. `/var/lib/docker|/var/lib/other` as a one-dimensional array. If we wanted to specify mount point per disk (plus size and filesystem) rather than just a sequence, we would need a two-dimensional array, which in bash is simple to emulate (the keys are actually just strings :-) ):
    
```shell
# Make DISKS an associative array ("x,y" keys emulate a 2-dimensional array)
declare -A DISKS

# First disk: index, mount point, size, filesystem
DISKS[0,1]="0"
DISKS[0,2]="/var/lib/docker"
DISKS[0,3]="50GB"
DISKS[0,4]="ext4"

# Second disk
DISKS[1,1]="1"
DISKS[1,2]="/mnt/disk2"
DISKS[1,3]="50GB"
DISKS[1,4]="ext4"

# Access it through variables: get the second disk's size
x=1 y=3
echo "disk 2 size: ${DISKS[$x,$y]}"
```
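For the simpler one-dimensional variant, splitting the proposed DISK_MOUNT_POINTS value on `|` is straightforward in bash (the variable name is the one suggested above, not something the project already has):

```shell
#!/usr/bin/env bash
# Split a pipe-delimited DISK_MOUNT_POINTS value into a bash array,
# one mount point per disk index.
DISK_MOUNT_POINTS="/var/lib/docker|/var/lib/other"
IFS='|' read -r -a MOUNTS <<< "$DISK_MOUNT_POINTS"

echo "disk 0 mount point: ${MOUNTS[0]}"
echo "disk 1 mount point: ${MOUNTS[1]}"
```

The array index then lines up with the disk number from DISK_COUNT, so the provisioning loop could mount disk N at `${MOUNTS[N]}`.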


I know it's not nice, I just quickly sketched out how it would be possible to achieve #66.
Any hints on this?

NOTE: After I deployed Apache Pulsar, I started getting disk pressure taints as the nodes ran out of space...

**Are there any similar features already existing:**

**What should the feature do:**
Enable the user to make a bigger root fs, or let them mount bigger disks to specific destinations; e.g. mounting one at /var would eventually help.

**What would be solved through this feature:**
Kubernetes disk pressure - I just deployed the MinIO S3 and Apache Pulsar operators, and all the pulled images filled up the storage.

**Does this have an impact on existing features:**
archenroot commented 3 years ago

@galexrt - you were already brainstorming this feature here: https://github.com/galexrt/k8s-vagrant-multi-node/issues/80