Problem/Opportunity Statement

We will eventually enable memory limits for CI jobs, but there is currently no way in our k8s/prometheus environment to detect when a job has been killed for exceeding its limit.
For example, I set KUBERNETES_MEMORY_LIMIT=1500M for this job, which was killed shortly after starting. There is no error reason in the log or in the exit code. See this OpenSearch query.
The kube_pod_container_status_last_terminated_exitcode metric is supposed to indicate an OOM kill for a job, but this isn't working.
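If the metric worked for our jobs, detection would just be a check for exit code 137 (128 + SIGKILL). A minimal sketch of that check, assuming a reachable Prometheus instance (the endpoint URL and pod/container names below are placeholders, not values from our environment):

```python
# Sketch of what OOM detection *should* look like if the metric behaved as
# documented. Endpoint and pod/container names are placeholders.
import requests

PROMETHEUS_URL = "http://prometheus:9090"  # hypothetical endpoint

def last_exit_code(pod: str, container: str) -> int | None:
    query = (
        "kube_pod_container_status_last_terminated_exitcode"
        f'{{pod="{pod}",container="{container}"}}'
    )
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return int(float(result[0]["value"][1])) if result else None

# 137 == 128 + SIGKILL(9), the exit code a container reports when its main
# process is OOM-killed. In practice this never shows up for our jobs,
# because the OOM victim is not the container's main process (see below).
if last_exit_code("runner-abcdef-project-0-concurrent-0", "build") == 137:
    print("job was OOM-killed")
```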
I came across a blog post that describes the same issue and I've been corresponding with the author (@jimmy-outschool).
According to him, Kubernetes only reports an OOM kill when the container's primary (PID 1) process is the one terminated, whereas the GitLab runner launches the build as a non-PID-1 process, so its OOM kill goes unrecorded.
What would success / a fix look like?
His solution involves a small patch to the GitLab runner that looks for OOM events in the kernel message buffer and writes the correct exit code to the job log. He has attempted to upstream it, to no avail.
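The actual patch is Go code inside the runner, but the detection logic it adds amounts to something like the following (Python here for illustration only; the log-line patterns are the common memcg OOM-kill format and may vary by kernel version):

```python
# Illustration of the detection logic: scan the kernel ring buffer for
# cgroup OOM-kill events and surface them in the job log. The real patch
# does this inside the GitLab runner (in Go); this is only a sketch.
import re
import subprocess

# Typical memcg OOM lines look roughly like:
#   oom-kill:constraint=CONSTRAINT_MEMCG,...,task=gcc,pid=12345,...
#   Memory cgroup out of memory: Killed process 12345 (gcc) ...
OOM_PATTERN = re.compile(r"oom-kill:|out of memory: Killed process \d+")

def find_oom_events() -> list[str]:
    # Reading the kernel message buffer may require extra privileges
    # (see the permissions question below).
    dmesg = subprocess.run(["dmesg"], capture_output=True, text=True, check=True)
    return [line for line in dmesg.stdout.splitlines() if OOM_PATTERN.search(line)]

if __name__ == "__main__":
    for line in find_oom_events():
        print(f"OOM event detected: {line}")
```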
While we may face headwinds when pushing to deploy a custom build of the GitLab runner, the alternative solutions are not great:
Using memory-usage metrics, we could check whether the last reported usage was within 90% of the limit and treat that as evidence of an OOM kill. However, allocation spikes between samples are so large that I've seen the last reported figure as low as 70% of the limit before the job was OOM-killed.
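For reference, that heuristic would look roughly like this (standard cAdvisor / kube-state-metrics metric names; the endpoint, labels, and threshold are placeholders), and the sampling gap is exactly where it falls down:

```python
# Sketch of the "last reported usage vs. limit" heuristic. Metric names are
# the usual cAdvisor / kube-state-metrics ones; endpoint and labels are
# placeholders.
import requests

PROMETHEUS_URL = "http://prometheus:9090"  # hypothetical endpoint
THRESHOLD = 0.9  # the "within 90% of the limit" rule of thumb

def instant_value(promql: str) -> float | None:
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": promql}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else None

def looks_oom_killed(pod: str, container: str) -> bool:
    sel = f'pod="{pod}",container="{container}"'
    usage = instant_value(f"container_memory_working_set_bytes{{{sel}}}")
    limit = instant_value(
        f'kube_pod_container_resource_limits{{{sel},resource="memory"}}'
    )
    if not usage or not limit:
        return False
    # The flaw: usage is only sampled at the scrape interval, so an allocation
    # spike between scrapes can push a job over the limit while the last
    # sample still sits at ~70% of it.
    return usage / limit >= THRESHOLD
```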
Instead of integrating the code into the GitLab runner, what if we wrapped the execution of Spack? The Spack subprocess would be the one OOM-killed, and the parent (wrapper) process could then look up the OOM event in the kernel message buffer and report it.
Questions:
Do we have permission to read the kernel message buffer from inside the job container?
Does the main process have enough information (e.g. PIDs or the job's cgroup) to match OOM messages to its subprocesses?
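To make the wrapper idea (and those questions) concrete, here is a rough Python sketch with a hypothetical command line and deliberately simplistic log matching; a real version would probably need to match on the job's cgroup rather than a single PID:

```python
# Rough sketch of the wrapper idea: run the Spack command as a subprocess and,
# if it dies from SIGKILL, look for a matching OOM event in the kernel log.
# The PID-based matching is illustrative only; the OOM victim may be a
# grandchild (e.g. a compiler), so matching on the job's cgroup would be
# more robust.
import signal
import subprocess
import sys

def oom_event_for(pid: int) -> bool:
    # Requires permission to read the kernel message buffer (open question above).
    dmesg = subprocess.run(["dmesg"], capture_output=True, text=True)
    return any(
        "oom" in line.lower() and f"pid={pid}" in line
        for line in dmesg.stdout.splitlines()
    )

def main() -> int:
    # e.g. invoked as: wrapper.py spack install <spec>
    child = subprocess.Popen(sys.argv[1:])
    returncode = child.wait()
    if returncode < 0:  # the child was killed by a signal
        if returncode == -signal.SIGKILL and oom_event_for(child.pid):
            print("ERROR: build process was OOM-killed", file=sys.stderr)
        returncode = 128 - returncode  # e.g. SIGKILL -> 137, the usual convention
    return returncode

if __name__ == "__main__":
    sys.exit(main())
```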