When booting up k3s with default settings, it logs Disabling CPU quotas due to missing cpu.cfs_period_us. Maybe it's related to this issue and helps.
I got into this while executing this https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/
This creates the pod, but the OOMKilled status expected in the next section never happens:
kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example
....
kubectl get pod memory-demo-2 --namespace=mem-example
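For reference, the manifest at that URL (memory-request-limit-2.yaml) looks roughly like this (copied from the Kubernetes docs example, so it may differ slightly from the current file): a container with a 100MiB memory limit that tries to allocate 250MiB.
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]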
@mengyangGIT I am not able to reproduce the issue with the latest k3s version. Here are my steps:
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"
kubectl top pods -n cpu-example
Result:
I can see that the limit is honored correctly:
✗ k top pods cpu-demo -n cpu-example
NAME CPU(cores) MEMORY(bytes)
cpu-demo 991m 1Mi
I also tried the example with a CPU request that exceeds the node's capacity:
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-2
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr-2
    image: vish/stress
    resources:
      limits:
        cpu: "100"
      requests:
        cpu: "100"
    args:
    - -cpus
    - "2"
which requests 100 CPUs; as expected, the pod didn't get scheduled:
k describe pods/cpu-demo-2 -n cpu-example
.....
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 27s (x5 over 6m24s) default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
@joaovitor I was able to reproduce this case: the OOM killer doesn't seem to be invoked. However, I noticed that the container running the stress command is not exceeding the memory limit configured for the pod:
k top pod -n mem-example
NAME CPU(cores) MEMORY(bytes)
memory-demo-2 18m 99Mi
bash-4.3# ps -o pid,user,rss,vsz,comm ax
PID USER RSS VSZ COMMAND
1 root 0 740 stress
6 root 83m 250m stress
7 root 196 6212 bash
15 root 4 1520 ps
root@pop-os:/sys/fs/cgroup/memory/kubepods/burstable# cat pod51a6febe-4d87-4c7c-beff-5ead07df2da5/cad6693e3974a359b4ea0ef193a4998bce376e45a1fdcacc671e9643bcab1096/memory.limit_in_bytes
104857600
root@pop-os:/sys/fs/cgroup/memory/kubepods/burstable# cat pod51a6febe-4d87-4c7c-beff-5ead07df2da5/cad6693e3974a359b4ea0ef193a4998bce376e45a1fdcacc671e9643bcab1096/memory.usage_in_bytes
103915520
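For anyone reproducing this, the pod UID portion of those cgroup paths can be looked up with kubectl (a rough sketch; the container ID directory still has to be read from the listing, and <container-id> below is a placeholder):
POD_UID=$(kubectl get pod memory-demo-2 -n mem-example -o jsonpath='{.metadata.uid}')
ls /sys/fs/cgroup/memory/kubepods/burstable/pod$POD_UID/
cat /sys/fs/cgroup/memory/kubepods/burstable/pod$POD_UID/<container-id>/memory.limit_in_bytes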
I was able to see the OOM killer being invoked in an RKE cluster with the same YAML file.
cc @erikwilson
@joaovitor The issue is happening because swap is enabled on the system. If swap is enabled, the OOM killer will not be triggered until there is no memory left in swap.
Closing, as this is expected behavior when swap is enabled.
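A quick way to check whether swap is active on the node (standard Linux tooling, not from the original comment):
swapon --show
free -h
If swapon prints nothing and the Swap row in free shows 0, swap is off.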
CPU limits don't work because the hasCFS flag returned by checkCgroups is false. I found that in kernel 3.10.0-x, the cpu subsystem entry in /proc/{pid}/cgroup is cpuacct,cpu, while in /sys/fs/cgroup it is cpu,cpuacct (out of order), which makes k3s look up the wrong path for cpu.cfs_period_us. I think it's a bug in kernel 3.10. To work around it, you can create a symlink from cpuacct,cpu to cpu,cpuacct like below:
sudo mount -o remount,rw '/sys/fs/cgroup'
sudo ln -s /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpuacct,cpu
sudo systemctl restart k3s
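To confirm you are hitting this particular ordering mismatch before applying the workaround (a hedged sketch; paths vary by distro and kernel):
cat /proc/1/cgroup | grep cpu
ls -d /sys/fs/cgroup/cpu*
If the first shows cpuacct,cpu while the directory is named cpu,cpuacct, the symlink above should help.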
The fix is not working for me with k3d running k3s:
k3d cluster create 1-20 --image rancher/k3s:v1.20.5-rc1-k3s1
logs:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cad43f091333 rancher/k3s:v1.20.5-rc1-k3s1 "/bin/k3s server --t…" 28 minutes ago Up 28 minutes k3d-1-20-server-0
docker logs cad43f091333
...
time="2021-03-27T20:56:01.397056042Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
...
docker exec cad43f091333 k3s --version
k3s version v1.20.5-rc1+k3s1 (355fff30)
go version go1.15.10
OS: Ubuntu 20.04
@pkoltermann can you confirm that it does not work when run outside of docker? I suspect docker may not be presenting all the correct cgroups to enable nested resource limits.
@brandond You are right, if I run it on the host machine it works. The question is how to make it work in docker?
I would probably take this question to the k3d issue tracker.
@galal-hussein I can only confirm that after disabling swap (it gets re-enabled after reboot in my case) and restarting the k3s service, the memory limits started working as expected. On Ubuntu I did the following.
Turn off all swap:
swapoff -a
Restart k3s:
systemctl restart k3s.service
After that, the memory-hungry pod was restarted each time it reached its memory limit.
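If swap keeps coming back after a reboot, it usually also has to be removed from /etc/fstab. A minimal sketch under that assumption (check your fstab entries before editing):
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries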
Describe the bug
Expected behavior
Expected: CPU limited to 20%, but whatever I set as the limit, CPU usage is always 100%.
Additional context
OS: CentOS 7
kernel version: 3.10.0-957.10.1.el7.x86_64
k3s version: 0.4
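For context, a 20% CPU cap is normally expressed as a 200m limit in the container spec; a minimal illustrative snippet (values and the matching request are placeholders, not from the original report):
resources:
  limits:
    cpu: "200m"
  requests:
    cpu: "100m"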