There seems to be an issue with how nodes_usage() gathers resource usage on each worker node when the Kubelet is configured to reserve CPU and RAM for system and Kube services, even if BOREAS_SCHEDULER_RESERVED_KUBLET_CPU and BOREAS_SCHEDULER_RESERVED_KUBLET_RAM are set to 0.
For instance, a node with the following Kubelet configuration is listed by kubectl describe node with 1800m CPU and 3589 MB RAM allocatable, while its capacity is 2 CPUs and 4094 MB RAM:
```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
systemReserved:
  cpu: 100m
  memory: 350M
kubeReserved:
  cpu: 100m
  memory: 50M
enforceNodeAllocatable:
  - pods
```
Boreas currently lists the node's allocatable values as 1600m CPU and 3487 MB RAM. Only 100m and 50Mi (52 MB) are requested by its kube-flannel pod, so an additional 100m and 155 MB appear to be "missing".
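For reference, the missing amounts can be reproduced from the figures above with plain arithmetic. This is only a sketch of the expected accounting (capacity minus systemReserved, kubeReserved, and the kube-flannel requests); all numbers are the ones reported in this issue, not read from a live cluster:

```python
# All figures from this report, in millicores (CPU) and MB (RAM).
capacity = {"cpu": 2000, "ram": 4094}            # node capacity
system_reserved = {"cpu": 100, "ram": 350}       # Kubelet systemReserved
kube_reserved = {"cpu": 100, "ram": 50}          # Kubelet kubeReserved
flannel_requests = {"cpu": 100, "ram": 52}       # kube-flannel pod (100m / 50Mi)
boreas_allocatable = {"cpu": 1600, "ram": 3487}  # values listed by Boreas

# Subtract every known consumer from capacity and compare with Boreas.
for res in ("cpu", "ram"):
    expected = (capacity[res] - system_reserved[res]
                - kube_reserved[res] - flannel_requests[res])
    missing = expected - boreas_allocatable[res]
    print(res, "missing:", missing)
# → cpu missing: 100
# → ram missing: 155
```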