kubernetes-sigs / prow

Prow is a Kubernetes-based CI/CD system developed to serve the Kubernetes community. This repository contains the Prow source code and the Hugo sources for the Prow documentation site.
https://docs.prow.k8s.io
Apache License 2.0

Crier failing to report OOMed pods #210

Open BenTheElder opened 3 months ago

BenTheElder commented 3 months ago

On prow.k8s.io we're seeing some pods fail to report any status beyond "running" even though they have actually been OOMed (if we manually inspect via the build cluster).

See: https://kubernetes.slack.com/archives/C7J9RP96G/p1721926824700569 and other #testing-ops threads.

/kind bug

jihoon-seo commented 3 months ago

I guess that you mean crier.. 😊 /retitle Crier failing to report OOMed pods

dims commented 3 months ago

😢

petr-muller commented 3 months ago

I don't think this is crier - that only comes later. I think this is podutils interacting with https://github.com/kubernetes/kubernetes/pull/117793.

NAME                                   READY   STATUS      RESTARTS   AGE
3b953040-572a-400f-8024-02310b8e54f8   1/2     OOMKilled   0          93m

The above (from the linked Slack thread) indicates one of the two containers in the Pod is still alive - the sidecar. That container waits for the entrypoint wrapper to write how the actual test process (entrypoint's subprocess) ended, but with https://github.com/kubernetes/kubernetes/pull/117793 all processes in the container get OOM-killed, including entrypoint. In the past only the test process would be killed, so entrypoint could still collect its exit code and exit normally.

Not sure yet what the solution would be.
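
For illustration, here is a minimal sketch of the wait pattern described above (the marker path, polling interval, and timeout are invented for this example; this is not podutils' actual code):

```go
// Sketch: the sidecar polls for a marker file that the entrypoint writes once
// the test process exits. If the entrypoint itself is OOM-killed, the marker
// is never written and the only way out is the timeout.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForMarker blocks until markerPath exists or the timeout expires.
func waitForMarker(markerPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(markerPath); err == nil {
			return nil // entrypoint recorded an exit code; sidecar can wrap up
		}
		time.Sleep(5 * time.Second)
	}
	return errors.New("timed out waiting for entrypoint marker")
}

func main() {
	if err := waitForMarker("/logs/marker.txt", 2*time.Hour); err != nil {
		fmt.Println("entrypoint never reported:", err)
	}
}
```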

BenTheElder commented 3 months ago

IMHO crier should still eventually report what happened here, because it observes pod success/failure from the outside, but we see pods being permanently reported as running (and if they really are still running, shouldn't we be enforcing a timeout via plank?)

We could either hand off more of this to crier/plank to observe the test container failure, or introduce additional heartbeating/signaling between the entrypoint and sidecar?
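
One hedged way to picture the heartbeat idea (nothing like this exists in podutils today; the file name and staleness threshold are made up): the entrypoint would touch a heartbeat file on an interval, and the sidecar would treat a stale heartbeat as evidence that the entrypoint died even though no marker was ever written.

```go
// Sketch of the sidecar-side staleness check for a hypothetical heartbeat file.
package main

import (
	"fmt"
	"os"
	"time"
)

// heartbeatStale reports whether the heartbeat file is missing or has not been
// touched within maxAge, which would suggest the entrypoint process is gone
// (e.g. OOM-killed along with the test process).
func heartbeatStale(path string, maxAge time.Duration) bool {
	info, err := os.Stat(path)
	if err != nil {
		return true // no heartbeat at all
	}
	return time.Since(info.ModTime()) > maxAge
}

func main() {
	if heartbeatStale("/logs/entrypoint-heartbeat", 2*time.Minute) {
		// The sidecar could then record an "entrypoint died" result, upload
		// whatever artifacts it has, and exit instead of waiting forever.
		fmt.Println("entrypoint heartbeat is stale")
	}
}
```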

petr-muller commented 3 months ago

True; you are right that plank should eventually reap a pod stuck like this. If that does not happen then it's perhaps a separate bug - I'm not sure whether plank just uses signals that are too soft (IIRC the sidecar ignores some signals in certain states, when it thinks it still needs to finish uploading artifacts) or whether it somehow fails to enforce a timeout entirely.

BenTheElder commented 3 months ago

Left a breadcrumb on https://github.com/kubernetes/kubernetes/pull/117793 because I think it's worth noting when the project breaks itself with breaking changes :)

This problem is probably going to get worse as cgroup v2 rolls out to the rest of our clusters and more widely in the ecosystem.

BenTheElder commented 3 months ago

https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-scale-amazonvpc-using-cl2/1818540717593595904

https://gcsweb.k8s.io/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-kops-aws-scale-amazonvpc-using-cl2/1818540717593595904/

We actually appear to have uploaded all the artifacts?? But we're just not recording a finished.json or the test log.

BenTheElder commented 2 months ago

I'm seeing a lack of metadata (like the pod JSON) on pods that received SIGTERM and probably(?) weren't OOMed - at least there's no indication they were, and monitoring suggests they're not actually using much of the memory they requested / are limited to.

https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-etcd-robustness-arm64

https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-etcd-robustness-arm64/1829281539263827968

BenTheElder commented 2 months ago

In that case we have some other weirdness: https://kubernetes.slack.com/archives/CCK68P2Q2/p1724976972581789?thread_ts=1724975407.113939&cid=CCK68P2Q2

oliver-goetz commented 1 month ago

> True; you are right that plank should eventually reap a pod stuck like this. If that does not happen then it's perhaps a separate bug - I'm not sure whether plank just uses signals that are too soft (IIRC the sidecar ignores some signals in certain states, when it thinks it still needs to finish uploading artifacts) or whether it somehow fails to enforce a timeout entirely.

AFAIK when a container is OOMKilled it exits with code 137, but no signal is sent to the other containers of the same pod. Thus, we would need some kind of heartbeat between the entrypoint and sidecar containers if we wanted to implement the fix there.

A potential plank implementation could check the Pod status for OOMKilled containers. It already checks for things like deleted pods, so it does not seem odd to implement an OOMKilled check as well. This could be done here and does not look that complicated. If it finds an OOMKilled container, plank could set the ProwJob status to failed and delete the pod. The sidecar would still have time to collect and upload artifacts.

I prefer the second option. WDYT?
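
A minimal sketch of the plank-side check described above, assuming the standard k8s.io/api/core/v1 types; the helper name is invented and wiring it into plank's sync loop is omitted:

```go
// Sketch: detect a container in the build pod that was terminated by the OOM
// killer, so the controller can fail the ProwJob and delete the pod.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// oomKilledContainer returns the name of the first container whose termination
// looks like an OOM kill (reason "OOMKilled" or exit code 137), if any.
func oomKilledContainer(pod *corev1.Pod) (string, bool) {
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil && (t.Reason == "OOMKilled" || t.ExitCode == 137) {
			return cs.Name, true
		}
	}
	return "", false
}

func main() {
	pod := &corev1.Pod{} // in plank this would come from the build cluster client
	if name, ok := oomKilledContainer(pod); ok {
		fmt.Printf("container %s was OOM-killed; mark the ProwJob failed and delete the pod\n", name)
	}
}
```

Checking both the reason string and exit code 137 hedges against runtimes that report the OOM kill differently.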

BenTheElder commented 1 month ago

That sounds reasonable but I haven't been in the prow internals for some time now, @petr-muller @krzyzacy might have thoughts 🙏