TheNewNormal / kube-cluster-osx

Local development multi-node Kubernetes Cluster for macOS made very simple
Apache License 2.0

master node root file system full causing creation failure #87

Closed cgswong closed 8 years ago

cgswong commented 8 years ago

Each attempt to initialize the cluster results in the below error on the master node:

Installing into k8smaster-01...
2016-07-20 22:37:16.686969 I | uploading 'kube.tgz' to 'k8smaster-01:/home/core/kube.tgz'
89.79 MB / 89.79 MB [===========================================================================================] 100.00 %
tar: ./kubelet: Wrote only 2048 of 10240 bytes
tar: Exiting with failure status due to previous errors
[ERROR] Process exited with status 2
Done with k8smaster-01

Upon checking the master node the root file system is full:

core@k8smaster-01 ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        371M     0  371M   0% /dev
tmpfs           499M     0  499M   0% /dev/shm
tmpfs           499M   13M  486M   3% /run
tmpfs           499M     0  499M   0% /sys/fs/cgroup
tmpfs           499M  499M     0 100% /
/dev/loop0      226M  226M     0 100% /usr
tmpfs           499M     0  499M   0% /media
tmpfs           499M     0  499M   0% /tmp
tmpfs           100M     0  100M   0% /run/user/500

This only occurs on the master node; the worker nodes are fine. I can't seem to track down the difference, and was hoping someone had some thoughts on the cause and a possible fix.

rimusz commented 8 years ago

@cgswong The root disk is read-only; data.img gets mounted to persist various folders. From your df output I can see that data.img is not mounted in your case, for some reason.
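
A quick way to check this on the node (a sketch; the data.mount unit name just follows systemd's naming convention for a /data mount point, and lsblk output will vary with the VM setup):

$ lsblk                                    # is the data disk attached as a block device at all?
$ mount | grep /data || echo "no /data"    # is anything currently mounted at /data?
$ systemctl status data.mount              # state of the systemd unit for /data, if one exists
$ journalctl -u data.mount --no-pager      # mount failure logs, if the unit exists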

rimusz commented 8 years ago

There should be a /dev/vda 1.9G 560M 1.3G 32% /data line, as in my setup:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        371M     0  371M   0% /dev
tmpfs           499M     0  499M   0% /dev/shm
tmpfs           499M   13M  486M   3% /run
tmpfs           499M     0  499M   0% /sys/fs/cgroup
tmpfs           499M   33M  466M   7% /
/dev/loop0      226M  226M     0 100% /usr
tmpfs           499M     0  499M   0% /media
tmpfs           499M     0  499M   0% /tmp
/dev/vda        1.9G  560M  1.3G  32% /data
tmpfs           100M     0  100M   0% /run/user/500
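
If lsblk shows the device attached but nothing mounted at /data, a manual mount should unblock the install (hedged: this assumes the data disk really is /dev/vda as above and already carries a filesystem):

$ sudo mkdir -p /data       # ensure the mount point exists
$ sudo mount /dev/vda /data # mount the persistent data disk
$ df -h /data               # confirm the space is now available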

rimusz commented 8 years ago

@cgswong please try the latest app version, v0.4.5; it has sparse disk support now. If the problem still exists, reopen this issue.
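
For reference, a sparse disk image reserves no blocks until data is actually written, so a large data.img no longer eats host disk up front. A minimal sketch of creating one on macOS with dd (the data.img name and 2 GiB size are illustrative assumptions, not necessarily what v0.4.5 does internally):

$ dd if=/dev/zero of=data.img bs=1m count=0 seek=2048   # 2 GiB apparent size, zero blocks written
$ ls -lh data.img   # reports the apparent size: 2.0G
$ du -h data.img    # reports actual blocks used: ~0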