abayer opened this issue 2 years ago
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now, please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now, please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
/remove-lifecycle rotten
/lifecycle frozen
The ones I notice right now are the `plumbing-image-build` and `pull-pipeline-kind-k8s-v1-21-e2e` PR `PipelineRun`s, and the `build-and-push-test-runner` cronjob-triggered `PipelineRun`. I've seen the `test-runner` image builds cause OOMs on their nodes, and the `plumbing-image-build` one I'm looking at right now is at over 5gb memory used. The `pull-pipeline-kind-k8s-v1-21-e2e` pods that I've seen have ranged between 2 and 4gb memory used.

None of them (or any of the other Tekton `PipelineRun`s, for that matter) have any `requests` or `limits` configured, so they can end up on the same node, or on a node with one of the other high-memory pods that always run in the cluster (i.e., Prometheus and Kafka), and cause problems. Given that dogfooding is hardcoded to 5 n1-standard-4s, with ~13gb allocatable memory each, it's pretty easy for just a few of the high-memory pods to end up on the same node and swamp it.