Closed — hugochinchilla closed this issue 5 years ago
We have created an issue in Pivotal Tracker to manage this:
https://www.pivotaltracker.com/story/show/161254182
The labels on this github issue will be updated when the story is started.
may be related to kubernetes/kubernetes#66961
Ok, I think I've found the problem.
Kubelet is running with `/var/lib/kubelet` as its root dir; I think it should be using `/var/vcap/data/kubelet` instead.
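One rough way to confirm which root dir kubelet was started with is to inspect its command line. A minimal sketch, assuming the flag appears there (the sample command line below is invented for illustration; `--root-dir` is the real kubelet flag):

```shell
# Invented example of a kubelet command line, as one might see in `ps` output
# on an affected worker:
KUBELET_CMDLINE='/var/vcap/packages/kubelet/bin/kubelet --root-dir=/var/lib/kubelet --v=2'

# Split the command line into one token per line and extract the --root-dir value:
echo "$KUBELET_CMDLINE" | tr ' ' '\n' | sed -n 's/^--root-dir=//p'
# → /var/lib/kubelet
```

On a live node you would feed the real command line in instead, e.g. via `ps -o args= -C kubelet`.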
closing based on resolution of https://github.com/cloudfoundry-incubator/kubo-release/pull/259
We have created an issue in Pivotal Tracker to manage this:
https://www.pivotaltracker.com/story/show/162153525
The labels on this github issue will be updated when the story is started.
reopened as it is still under review
my close/reopen created a duplicate tracker item, I have deleted 162153525, https://www.pivotaltracker.com/story/show/161254182 is the one to track
I would like this error message improved:
Jan 8 01:07:30 vinson kubelet[1514]: I0108 01:07:30.586617 1514 image_gc_manager.go:300] [imageGCManager]: Disk usage on image filesystem is at 85% which is over the high threshold (85%). Trying to free 481610956 bytes down to the low threshold (80%).
It doesn't give device name or mount point so I can't figure out which filesystem it's complaining about (there is plenty of space on mounted volumes, so perhaps it's looking at the thinpool LVM which I'm using for image storage).
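For context on where that byte count comes from: kubelet reclaims enough image data to bring the image filesystem back down to `--image-gc-low-threshold` once usage crosses `--image-gc-high-threshold`. A sketch of that arithmetic with made-up numbers (the two flags are real kubelet flags; the capacity and usage values here are invented):

```shell
# Hypothetical image filesystem, just over the 85% high threshold:
capacity=10000000000   # invented capacity in bytes
used=8550000000        # invented bytes in use (85.5%)
low_threshold=80       # --image-gc-low-threshold, in percent

# Bytes to free = current usage minus the target usage at the low threshold:
bytes_to_free=$(( used - capacity / 100 * low_threshold ))
echo "$bytes_to_free"
# → 550000000
```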
Also, the garbage collector barfs when it encounters statically-launched images started by `docker run` rather than k8s.
@instantlinux That seems more within the purview of the Kubernetes community. Please raise an issue there.
@hugochinchilla This should be fixed in the default manifest as of CFCR v0.31.0 (Kubelet's root-dir is set to `/var/vcap/data/kubelet`).
Sure thing @tvs, thanks for reminding me. I've reported there as issue #75708.
Thanks for the update @tvs
What happened:
I'm having problems with my workers reporting the wrong amount of disk capacity on the VM; I'm getting pod evictions while having a lot of free space on the ephemeral disk. The VM has two disks, `sda1` with the system install and `sdb2` for ephemeral data. kubelet seems to detect the size of `sda1` as the amount of space available for ephemeral storage. Take the `3030944Ki` and convert it to bytes: you get `3103686656`. Searching for this number in the kubelet logs, I can see it is the exact size of the system partition (`sda1`). Docker is running with `--graph /var/vcap/data/docker/docker`, which is on `sdb2`, not `sda1`.
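That Ki-to-bytes conversion can be checked with shell arithmetic (both numbers are taken from the report above):

```shell
# The node reports ephemeral-storage capacity of 3030944Ki;
# 1 Ki = 1024 bytes, so the capacity in bytes is:
echo $(( 3030944 * 1024 ))
# → 3103686656
```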
Here is the output of `df` (redacted):

And the relevant section from the kubelet log:
What you expected to happen:
kubelet to detect `sdb2` as the correct storage for ephemeral data.

How to reproduce it (as minimally and precisely as possible):
Deploy CFCR on a vSphere cluster. Get the description of a worker node with `kubectl describe node`, and search for `ephemeral-storage` under `Capacity`.

Anything else we need to know?:
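For reference, the capacity value from the reproduction step can be pulled out of `kubectl describe node` output like this. The excerpt below is invented for illustration (the field names are the real ones, the numbers are not):

```shell
# Invented excerpt of `kubectl describe node` output for an affected worker:
describe_output='Capacity:
 cpu:                2
 ephemeral-storage:  3030944Ki
 memory:             4040840Ki'

# Extract the reported ephemeral-storage capacity:
echo "$describe_output" | awk '/ephemeral-storage/ {print $2}'
# → 3030944Ki
```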
Environment:
- Deployment info (`bosh -d <deployment> deployment`):
- Environment info (`bosh -e <environment> environment`):
- Kubernetes version (`kubectl version`):
- Cloud provider (`aws`, `gcp`, `vsphere`): vsphere