jhunt / k8s-boshrelease

A BOSH Release for deploying Kubernetes clusters
MIT License

Add a force drain option to purge all pods on a node #67

Open obeyler opened 4 years ago

obeyler commented 4 years ago

When updating a BOSH deployment to resize a disk or update a stemcell, we need to be able to perform a full drain automatically. We can take inspiration from the CFCR BOSH release: https://github.com/cloudfoundry-incubator/kubo-release/blob/6f9046f29de4b9e3f089502df6887d5d058b4249/jobs/kubelet/templates/bin/drain.erb#L75 and its check for attached PVs / disks: https://github.com/cloudfoundry-incubator/kubo-release/blob/6f9046f29de4b9e3f089502df6887d5d058b4249/jobs/kubelet/templates/bin/drain.erb#L97
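A forced drain step along the lines of the kubo-release script could look roughly like the sketch below. This is a minimal illustration, not the actual drain template: the `KUBECTL` default and the `NODE_NAME` environment variable are hypothetical, and `--delete-emptydir-data` requires kubectl 1.20+ (older releases call it `--delete-local-data`).

```shell
#!/bin/bash
# Minimal sketch of a forced drain for a BOSH drain hook (hypothetical
# paths; the real kubo-release drain.erb linked above is more involved).
set -e

KUBECTL=${KUBECTL:-/var/vcap/packages/kubernetes/bin/kubectl}

drain_node() {
  # Evict every pod, including unmanaged and DaemonSet pods, and discard
  # emptyDir contents so the VM can be recreated or its disk resized.
  "$KUBECTL" drain "$1" \
    --ignore-daemonsets \
    --delete-emptydir-data \
    --force \
    --timeout=300s
}

wait_for_detached_volumes() {
  # Block until the node reports no attached volumes, mirroring the
  # attached-PV check in the kubo-release drain script.
  while "$KUBECTL" get node "$1" \
      -o jsonpath='{.status.volumesAttached}' | grep -q .; do
    sleep 5
  done
}

# In a real release, NODE_NAME would be injected by the job template.
if [ -n "${NODE_NAME:-}" ]; then
  drain_node "$NODE_NAME"
  wait_for_detached_volumes "$NODE_NAME"
fi
```

Waiting for `volumesAttached` to empty out matters for the persistent-disk case: evicting the pod is not enough if the controller has not yet detached its volume from the node.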

obeyler commented 4 years ago

@jhunt is there a good reason to set the containerd root to /var/vcap/store, and not to /var/vcap/data?

https://github.com/jhunt/k8s-boshrelease/blob/4adef9fa622739fb819614650bfeb9a858ca2d8d/jobs/runtime-runc/templates/bin/containerd#L54 It seems to lock the persistent disk and prevent it from being resized.
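Moving the runtime root to the ephemeral disk would essentially be a one-line change in the startup wrapper. A hedged sketch (paths are illustrative, not the exact contents of jobs/runtime-runc/templates/bin/containerd; `--root` and `--state` are standard containerd flags):

```shell
# Sketch: point containerd's root at the ephemeral disk so BOSH can
# detach and resize the persistent disk (/var/vcap/store) freely.
CONTAINERD_BIN=/var/vcap/packages/containerd/bin/containerd  # illustrative path
RUNTIME_ROOT=/var/vcap/data/containerd                       # was /var/vcap/store/containerd

# Only exec when the packaged binary is actually present (i.e. on a VM).
if [ -x "$CONTAINERD_BIN" ]; then
  mkdir -p "$RUNTIME_ROOT"
  exec "$CONTAINERD_BIN" \
    --root  "$RUNTIME_ROOT" \
    --state /var/vcap/sys/run/containerd
fi
```

The trade-off is that everything under the root (image layers, container rootfs) is lost on VM recreation, which is usually acceptable since images can be re-pulled.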

jhunt commented 4 years ago

Probably just an oversight.

Out of curiosity, what are you storing on your persistent volume? Most of my kubelet nodes lack persistent volumes, since they have no data to persist.

obeyler commented 4 years ago

I use the persistent volume to store the data of PVs created by OpenEBS (openebs.io). With this method I'm not dependent on any cloud provider; it creates dynamic LocalPV or cStor storage classes. You can also use a project like Longhorn from Rancher, which likewise uses a directory to store data: instead of putting it in /var/lib/longhorn, you put it in /var/vcap/store/longhorn.

On your side, how do you manage persistent volumes in your K8s? Do you use a cloud provider to create disks? I did that before with CFCR, but I noticed that when I destroyed a cluster, the disks created on the IaaS by the provisioner weren't deleted. Since I frequently create and delete clusters, I ended up with a lot of orphaned disks on my IaaS. None of them had a specific name that would let me tell which cluster they belonged to, so I couldn't work out which disks belonged to an active K8s cluster and which to a deleted one. This is why I decided to move to OpenEBS, since the disk is provided by the K8s node itself.
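For reference, pointing OpenEBS LocalPV (hostpath) at the BOSH persistent disk is done through the storage class's `BasePath` setting. Something like the following, where the class name is made up and /var/vcap/store/openebs replaces the stock default of /var/openebs/local:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-store   # hypothetical name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/vcap/store/openebs
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```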