Closed: Warxcell closed this issue 1 year ago.
Additional info:
Dec 06 14:58:14 acerpredator kernel: php invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=992
Dec 06 14:58:14 acerpredator kernel: oom_kill_process.cold+0xb/0x10
Dec 06 14:58:14 acerpredator kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Dec 06 14:58:14 acerpredator kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=docker-45c2db11f60c765fda1671c78dc8d827c182da64bac6ccf838cf9b0018e323bb.scope,mems_allowed=0,oom_memcg=/system.slice/docker-6d047dd8bb660429d8736ed7ae76464b39ba69c105bafd5796f3ffd5ab1adb73.scope/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91cead63_a202_475f_9a26_97c8ab358d71.slice,task_memcg=/system.slice/docker-6d047dd8bb660429d8736ed7ae76464b39ba69c105bafd5796f3ffd5ab1adb73.scope/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91cead63_a202_475f_9a26_97c8ab358d71.slice/docker-45c2db11f60c765fda1671c78dc8d827c182da64bac6ccf838cf9b0018e323bb.scope,task=php,pid=636206,uid=1000
Dec 06 14:58:14 acerpredator kernel: Memory cgroup out of memory: Killed process 636206 (php) total-vm:512864kB, anon-rss:203564kB, file-rss:1004kB, shmem-rss:56812kB, UID:1000 pgtables:664kB oom_score_adj:992
Dec 06 14:58:14 acerpredator systemd[1]: docker-6d047dd8bb660429d8736ed7ae76464b39ba69c105bafd5796f3ffd5ab1adb73.scope: Failed with result 'oom-kill'.
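The `constraint=CONSTRAINT_MEMCG` field in these logs means the kill came from a container's memory-cgroup limit, not from the host running out of RAM. A quick way to compare that limit against usage from inside the affected container (a sketch assuming cgroup v1 paths; on cgroup v2 the equivalent files are `memory.max` and `memory.current`):

```shell
# Inside the affected container, compare the cgroup limit against usage (cgroup v1):
cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # hard limit the OOM killer enforces
cat /sys/fs/cgroup/memory/memory.usage_in_bytes   # current usage; a value near the limit explains the kill
```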
So this is occasional? Sometimes it works fine and other times it fails to start?
It has happened 6 times so far, mostly while I was AFK. I increased the memory of one of the containers, and so far (2 days) it's OK.
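For reference, a minimal sketch of how a container's memory limit is typically raised in Kubernetes; the deployment name `php-app`, the label selector, and the sizes are hypothetical placeholders, not the values from this report:

```shell
# Raise the memory request/limit on the workload that keeps getting OOM-killed
# (workload name and sizes are hypothetical):
kubectl set resources deployment php-app \
  --requests=memory=256Mi --limits=memory=512Mi

# Confirm what the new pods actually received:
kubectl describe pod -l app=php-app | grep -A 2 Limits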
Based on your provided logs (Memory cgroup out of memory: Killed process), it does look like it's running out of memory.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
> Based on your provided logs (Memory cgroup out of memory: Killed process), it does look like it's running out of memory.
Yes, but one specific container is running out of memory, and the whole minikube cluster crashes.
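Separately, if the node itself is also under memory pressure, the minikube node can be given more memory. A sketch, assuming the Docker driver from this report; the sizes are examples only, and with this driver the cluster has to be recreated for the new size to take effect:

```shell
# Values are examples; pick sizes your host can spare.
minikube stop
minikube delete                  # memory size cannot be changed on an existing cluster
minikube start --driver=docker --memory=8192 --cpus=4
```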
What Happened?
Minikube crashes randomly and then fails to start:
Attach the log file
log.txt
Operating System
Other
Driver
Docker