EugenMayer opened this issue 1 day ago
AFAICS it is killed by the OOM killer:
[1034895.974251] Memory cgroup out of memory: OOM victim 3085690 (httpd) is already exiting. Skip killing the task
[1034895.974253] Memory cgroup out of memory: Killed process 3085741 (httpd) total-vm:312132kB, anon-rss:9032kB, file-rss:4116kB, shmem-rss:76kB, UID:1001 pgtables:168kB oom_score_adj:993
I found an entry for each time the pod has been restarted.
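For reference, these entries come from the node's kernel log; something along these lines should pull them up again (assuming shell access to the node):
  dmesg | grep -i "memory cgroup out of memory"
  # or, on systemd-based nodes
  journalctl -k | grep -i oom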
It seems like the limits / resources are not properly applied? I used
resources:
  limits:
    cpu: 500m
    ephemeral-storage: 2Gi
    memory: 512Mi
  requests:
    cpu: 400m
    ephemeral-storage: 100Mi
    memory: 512Mi
to raise it to 512Mi, but it is still killed at around 310Mi (which is the default)?
We have more than enough RAM on the nodes, so it must be a virtual limit that applies.
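A quick way to see which limit the kernel is actually enforcing is to read the cgroup files from inside the running container (a sketch, <wordpress_pod> being a placeholder; the path depends on whether the node uses cgroup v2 or v1):
  # cgroup v2: limit and current usage
  kubectl exec <wordpress_pod> -- cat /sys/fs/cgroup/memory.max
  kubectl exec <wordpress_pod> -- cat /sys/fs/cgroup/memory.current
  # cgroup v1 equivalent
  kubectl exec <wordpress_pod> -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
If that reports roughly 512Mi, the limit is applied and the container really is exhausting it; if it reports something smaller, the limit is being overridden somewhere.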
I checked the deployment (I upped it to 1024Mi) and found, for the containers:
resources:
  limits:
    cpu: 500m
    ephemeral-storage: 2Gi
    memory: 1Gi
  requests:
    cpu: 500m
    ephemeral-storage: 100Mi
    memory: 1Gi
So the limits have been applied to the deployment, and also on the pod:
resources:
  limits:
    cpu: 500m
    ephemeral-storage: 2Gi
    memory: 1Gi
  requests:
    cpu: 500m
    ephemeral-storage: 100Mi
    memory: 1Gi
But the OOM killer still kills at around 310MB:
[Thu Nov 7 18:52:33 2024] Memory cgroup out of memory: OOM victim 3251759 (httpd) is already exiting. Skip killing the task
[Thu Nov 7 18:52:33 2024] Memory cgroup out of memory: Killed process 3251760 (httpd) total-vm:350456kB, anon-rss:47916kB, file-rss:12668kB, shmem-rss:74452kB, UID:1001 pgtables:448kB oom_score_adj:-997
[Thu Nov 7 18:52:33 2024] Memory cgroup out of memory: Killed process 3251761 (httpd) total-vm:350200kB, anon-rss:47884kB, file-rss:12668kB, shmem-rss:74452kB, UID:1001 pgtables:444kB oom_score_adj:-997
[Thu Nov 7 18:52:33 2024] Memory cgroup out of memory: Killed process 3251763 (httpd) total-vm:346424kB, anon-rss:43728kB, file-rss:12732kB, shmem-rss:72980kB, UID:1001 pgtables:436kB oom_score_adj:-997
[Thu Nov 7 18:52:33 2024] Memory cgroup out of memory: Killed process 3251764 (httpd) total-vm:402344kB, anon-rss:25412kB, file-rss:15584kB, shmem-rss:67992kB, UID:1001 pgtables:404kB oom_score_adj:-997
[Thu Nov 7 18:52:33 2024] Memory cgroup out of memory: Killed process 3251786 (httpd) total-vm:350200kB, anon-rss:46616kB, file-rss:12732kB, shmem-rss:73108kB, UID:1001 pgtables:440kB oom_score_adj:-997
[Thu Nov 7 18:52:33 2024] Memory cgroup out of memory: Killed process 3251789 (httpd) total-vm:329976kB, anon-rss:27056kB, file-rss:12576kB, shmem-rss:69588kB, UID:1001 pgtables:400kB oom_score_adj:-997
[Thu Nov 7 18:52:33 2024] Memory cgroup out of memory: Killed process 3251792 (httpd) total-vm:324724kB, anon-rss:22144kB, file-rss:12384kB, shmem-rss:59540kB, UID:1001 pgtables:364kB oom_score_adj:-997
[Thu Nov 7 18:52:33 2024] Memory cgroup out of memory: Killed process 3251795 (httpd) total-vm:312132kB, anon-rss:9164kB, file-rss:4176kB, shmem-rss:76kB, UID:1001 pgtables:176kB oom_score_adj:-997
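In case it is relevant: as far as I know the kubelet derives oom_score_adj from the pod's QoS class (-997 is what it sets for Guaranteed pods, i.e. requests equal to limits), and namespace-level LimitRange / ResourceQuota objects can also shape the effective values. Both can be checked with (placeholders, adjust to the release's namespace):
  kubectl get pod <wordpress_pod> -o jsonpath='{.status.qosClass}'
  kubectl get limitrange,resourcequota -n <namespace>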
Hi,
Could you confirm that when you run kubectl describe pod <wordpress_pod> you can see the established limits? If so, it seems to me there might be an issue with the Kubernetes cluster itself, maybe a general virtual limit (I am not sure how that would be enforced, maybe in kubelet?). In that case, I would check with the cluster administrator.
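For instance (a sketch, <wordpress_pod> and <namespace> being placeholders):
  kubectl describe pod <wordpress_pod> -n <namespace> | grep -A 6 'Limits:'
  kubectl get pod <wordpress_pod> -n <namespace> -o jsonpath='{.spec.containers[*].resources}'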
Name and Version
bitnami/wordpress 23.1.28
What architecture are you using?
amd64
What steps will reproduce the bug?
Are you using any custom parameters or values?
nothing fancy:
What is the expected behavior?
The pod should not restart.
What do you see instead?
The pod does restart.
Additional information
I don't seem to be able to see anything helpful when the restart happens. All I see is a couple of requests and then suddenly the pod restarts:
Is there any way to find out why the pod is restarted in the first place?
The events do not tell anything specific either:
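One thing that usually still works when the events are empty is reading the container's last terminated state from the pod status; for an OOM kill the reason shows up as OOMKilled (a sketch, <wordpress_pod> being a placeholder):
  kubectl get pod <wordpress_pod> -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'
  # or the full last state, including exit code and timestamps
  kubectl get pod <wordpress_pod> -o jsonpath='{.status.containerStatuses[*].lastState}'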