actions / actions-runner-controller

Kubernetes controller for GitHub Actions self-hosted runners
Apache License 2.0
4.4k stars 1.04k forks source link

"Job is waiting for a runner from XXX to come online" with 0.9.0 and 0.9.1 (works with 0.8.3) #3499

Closed alecor191 closed 1 month ago

alecor191 commented 1 month ago

Checks

Controller Version

0.9.0 and 0.9.1

Deployment Method

Helm

Checks

To Reproduce

1. We've set up an AKS cluster with ACR 0.9.0 and one with ACR 0.9.1
2. schedule a couple dozen workflows runs that will trigger cluster scale out (and back in)

Describe the bug

Describe the expected behavior

No scheduled GHA run gets stuck waiting for a runner. All jobs eventually get a runner assigned.

Additional Context

Controller Logs

Search for // ISSUE in the following controller logs to find the time where we observed the issue.

Link to Gist

Esp. in file manager-medium.log searching for // ISSUE will show that there were no logs for minutes. Only when we killed the listener, then new logs started to show up indicating that more runners were needed.

Runner Pod Logs

N/A

jonwest commented 1 month ago

Same behaviour here as well. Appreciate you posting the workaround(s)! Killing the listener pod got things online, but I will revert to 0.8.3 in order to have some semblance of reliability in the interim. 🙏

thiago-juro commented 1 month ago

We're facing the same issue. Reverted to 0.8.3 as well.

cheskayang commented 1 month ago

having the same issue with 0.9.0 and 0.9.1, added some observations in this ticket https://github.com/actions/actions-runner-controller/issues/3501

thx for the tip! will revert to 0.8.3 for now

Mahmoudkassry commented 1 month ago

Same issue, reverting to 0.8.3. Thanks for the tip!

alecor191 commented 1 month ago

Tagging @nikola-jokic who was active in #3420: Would you be able to help with this issue?

jonwest commented 1 month ago

For those of you that are experiencing this issue—are you using ArgoCD, by chance? Just trying to see if there is any other correlations between my setup and others that are experiencing issues. I've noticed that it seems like the listeners are needing to be restarted around the time of a sync with Argo, and I'm curious if that has an effect.

Tycale commented 1 month ago

I have the same issue. I am using FluxCD. I will try to use the 0.8.2 version to see if it helps.

EDIT: I opted for version 0.8.3, and it's performing beautifully! To downgrade, I removed all the namespaces, CRDs, roles, role bindings, service accounts, deployments, helm charts, etc. Omitting this step appears to cause problems with the listener not starting up.

Mahmoudkassry commented 1 month ago

Appears to be happening with 0.8.3 as well now.

jnogol commented 1 month ago

happening here as well, when inspecting the ephemeral runner, we're getting Failed to create the pod: pods "<my-runner-name>-hr7bs-runner-rjlp9" is forbidden: exceeded quota: <myNS>-ghr-resource-quota, requested: limits.memory=8Gi, used: limits.memory=16Gi, limited: limits.memory=16Gi but, interestingly, there are no pods running in that namespace, so I'm not sure what is using up the quota.

mpjarvis commented 1 month ago

I'm able to reproduce the problem pretty consistently in 0.9.0 and 0.9.1 by kicking off around a dozen jobs simultaneously. With this many pods spinning up at the same time, it takes longer for them to all initialize, which seems to trigger the bug.

A little less than a minute after EphemeralRunnerSet started scaling up, I see logs like this:

May 13 13:11:50 arc-gha-rs-controller-6bb495b57-l8hhf manager INFO EphemeralRunnerSet   Ephemeral runner counts {"ephemeralrunnerset": {"name":"my-runner-zdldx","namespace":"my-runners"}, "pending": 5, "running": 7, "finished": 0, "failed": 0, "deleting": 0}
May 13 13:11:50 arc-gha-rs-controller-6bb495b57-l8hhf manager INFO EphemeralRunnerSet   Scaling comparison  {"ephemeralrunnerset": {"name":"my-runner-zdldx","namespace":"my-runners"}, "current": 12, "desired": 0}
May 13 13:11:50 arc-gha-rs-controller-6bb495b57-l8hhf manager INFO EphemeralRunnerSet   Deleting ephemeral runners (scale down) {"ephemeralrunnerset": {"name":"my-runner-zdldx","namespace":"my-runners"}, "count": 12}

All the runners get killed and never report any status back to GitHub. Note, I'm using containerMode.type=dind in my gha-runner-scale-set, so each runner needs to wait for dind to initialize before jobs will start executing.

Rolling back to 0.8.3 seems to fix the problem for me.

Update... After doing a bit more searching, the problem I'm seeing sounds exactly like #3450.

krupakar1329 commented 1 month ago

i am facing this issue on 0.7.0 version. whats the solution ? INFO EphemeralRunner EphemeralRunner has failed more than 5 times. Marking it as failed {"ephemeralrunner": {"name":"pd-cluster-xxxx-runner-9nk4l","namespace":"github-arc-runner-xxxx"}}

nikola-jokic commented 1 month ago

Hey @alecor191,

Could you please let us know if the issue is resolved in 0.9.2 version?

alecor191 commented 1 month ago

Hi @nikola-jokic. Sure, happy to. I just upgraded to 0.9.2 and will share results in ~1 day.

Update 2024-05-21: No issues after running it for 12+ hours. However, then we started observing stuck pipelines. However, this could very well be due to an ongoing GHA incident.

Update 2024-05-22: No issues since the GHA incident mentioned above was resolved. IOW 0.9.2 has been working fine for us for 48h (with 0.9.0/1 we ran into the issue within minutes/few hours).

CarlosHenriqueDamasceno commented 1 month ago

I'm facing the same issue, previously using 0.5.1, updated to 0.8.3 as suggested but the problem remains, so updated for 0.9.2 and nothing seems to change.

shanehull commented 1 month ago

0.9.2 is working for us so far.

zdenekpizl-foxdeli commented 1 month ago

0.9.2 suffers from the same issues. Either the job is waiting for a runner and it never starts although the pod is up, running and job is assigned or it, time by time, timeouts in the middle. Very dissatisfying ...

In fact ARC is not usable from 0.9.0 for us apparently.

zdenekpizl-foxdeli commented 1 month ago

to put my 2 cents:

**** Web UI Output of the job:
Requested runner group: Foxdeli-runners-dind
Job defined at: redacted
Reusable workflow chain: redacted -> redacted, not relevant
Waiting for a runner to pick up this job...
Job is waiting for a runner from 'dind-tools-prod-europe-west3' to come online.
Job is about to start running on the runner: dind-tools-prod-europe-west3-gc62k-runner-6gp7f (organization)

**** and the POD in arc-runners ns reports:
16:26 $ k logs pods/dind-tools-prod-europe-west3-gc62k-runner-6gp7f
Defaulted container "runner" out of: runner, dind, init-dind-externals (init)

√ Connected to GitHub

Current runner version: '2.316.1'
2024-05-21 14:24:20Z: Listening for Jobs
2024-05-21 14:26:10Z: Running job: PR / Run integration tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ it "runs" a job but it has never started

**** Listener in arc-systems ns:
Name": "foxdeli", "repoName": "repository redacted", "workflowRef": "redacted", "workflowRunId": 9162243747, "jobDisplayName": "PR / Run integration tests", "requestId": 11376}
2024-05-21T14:26:16Z    INFO    listener-app.worker.kubernetesworker    Updating ephemeral runner with merge patch  {"json": "{\"status\":{\"jobDisplayName\":\"PR / Run integration tests\",\"jobRepositoryName\":\"foxdeli/redacted\",\"jobRequestId\":11376,\"jobWorkflowRef\":\"redacted, not relevant\",\"workflowRunId\":9162243747}}"}
2024-05-21T14:26:16Z    INFO    listener-app.worker.kubernetesworker    Ephemeral runner status updated with the merge patch successfully.
2024-05-21T14:26:16Z    INFO    listener-app.worker.kubernetesworker    Created merge patch json for EphemeralRunnerSet update  {"json": "{\"spec\":{\"replicas\":2}}"}
2024-05-21T14:26:16Z    INFO    listener-app.worker.kubernetesworker    Scaling ephemeral runner set    {"assigned job": 1, "decision": 2, "min": 1, "max": 30, "currentRunnerCount": -1}
2024-05-21T14:26:16Z    INFO    listener-app.worker.kubernetesworker    Ephemeral runner set scaled.    {"namespace": "arc-runners", "name": "dind-tools-prod-europe-west3-gc62k", "replicas": 2}

^^^^^^^^^^^^^^^^^^^^^ so it scales-up the set, but nothing is going to happen.

I have to say we're facing these issues for past few weeks. Any idea what's going on?

nikola-jokic commented 1 month ago

Hey @zdenekpizl-foxdeli,

Just to make sure I understand, the job lands on the runner, but it never finishes?

zdenekpizl-foxdeli commented 1 month ago

Hi, it even never starts. There is just a message about the run that it is about to start but that's all. job

zdenekpizl-foxdeli commented 1 month ago

I am all for debug it thoroughly, but I have no idea what else to debug on my side. Please, propose the RootCauseAnalysis hunting approach for this issue ...

zdenekpizl-foxdeli commented 1 month ago

Another observations (I've redeployed self-hosted runners and controller once again, just to be sure everything is clean and installed neatly):

19:32 $ k get pods -n arc-runners
NAME                                                READY   STATUS    RESTARTS   AGE
dind-tools-prod-europe-west3-d7bzs-runner-227zh     2/2     Running   0          89m
^^^^^^^^^^^^^ min count of runners is 1, this one is there from the beginning

dind-tools-prod-europe-west3-d7bzs-runner-wsjv9     2/2     Running   0          79s
^^^^^^^^^^^^^ this one has been created just after a workflow started

k8s-tools-prod-europe-west3-dtk8p-runner-8bklr      1/1     Running   0          35m
nodejs-tools-prod-europe-west3-jkg5p-runner-hsc6w   2/2     Running   0          89m
nodejs-tools-prod-europe-west3-jkg5p-runner-w9r8z   2/2     Running   0          3m21s

According to the output within Web UI, the job is waiting for:

Job is waiting for a runner from 'dind-tools-prod-europe-west3' to come online.
Job is about to start running on the runner: dind-tools-prod-europe-west3-d7bzs-runner-227zh (organization)

So there is new runner/worker for DIND created but the job is assigned to the old one. And of course, it does not even start much less run or finish.

Interestingly, the Kubernetes mode (k8s- prefixed runners) works fine, no issues during invocation of workflows/jobs. Why there is such dichotomy?

nikola-jokic commented 1 month ago

Hey @zdenekpizl-foxdeli,

I'm not sure, is it possible that docker or something kills the runner? It must not be an ARC issue, since the pod is up, so it scales and the runner takes the job. What I'm not sure is what causes the runner to exit? What is the pod lifecycle after the job is accepted by the runner?

zdenekpizl-foxdeli commented 1 month ago

Hmm, that's good question and I have no answer for it. I found some traces that the container in a pod has been restarted, but did not find reason. So maybe resource exhaustion, who knows ...

Anyway, my problem has disappeared because I uninstalled the ARC's K8S helm chart, deleted CRDs, namespaces and other related leftovers from the cluster. Also, I've re-created runners group in Organization's settings and then installed clean ARC version 0.8.3. Now it works like a charm.

So I would say there was some mixture of troubles resulting in non-functional deployment.

nikola-jokic commented 1 month ago

Hey @alecor191,

Thank you for writing updates! I think this issue is now safe to close. We can always re-open it, or create a new one. And I want to thank you all for providing more information!

krupakar1329 commented 1 month ago

@nikola-jokic the issue seems to be appearing again