Hey @willchan,
Thank you for raising this issue and for using `runner-container-hooks`! I would say that using pods really depends on your use case.
If the job's pod fails, do we really need to restart it? What if the `run-script-step` kills the container? Does it make sense to re-run the job container that will be cleaned up after the job?
The idea is to use the simplest resource that needs to exist for the duration of a hook's execution. If the job pod fails, we should fail the job. Also, the k8s Job resource is designed to run something to completion, whereas what we really need here is a place we can attach to and execute commands in.
But maybe I am wrong.
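Concretely, "attach and execute" means roughly the following. This is a minimal sketch using `@kubernetes/client-node` (the client the k8s hook package is built on); the namespace, pod, and container names are illustrative, not the hook's actual values:

```typescript
import * as k8s from '@kubernetes/client-node'

async function main(): Promise<void> {
  const kc = new k8s.KubeConfig()
  kc.loadFromDefault()

  // Open a websocket to a running pod and execute a command inside it,
  // which is the pattern the hook relies on to run workflow steps.
  const exec = new k8s.Exec(kc)
  await exec.exec(
    'runners',        // namespace (illustrative)
    'workflow-pod',   // pod name (illustrative)
    'job',            // container name (illustrative)
    ['sh', '-c', 'echo hello from the job container'],
    process.stdout,   // stream for stdout
    process.stderr,   // stream for stderr
    null,             // no stdin
    false             // no tty
  )
}

main().catch(err => {
  console.error(err)
  process.exit(1)
})
```

A plain pod supports this attach/exec pattern directly; a Job only adds retry semantics on top of the pod it creates.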
On the other hand, you can always customize this hook the way you see fit. The Runner Container Hooks repo provides a set of packages that implement the container hooks feature in actions/runner. These can be used as is, or as a guide to implement your own hooks.
However, I will discuss this issue with the team and let you know as soon as we decide how to proceed :relaxed:
Thanks for the response @nikola-jokic. I see this issue is more nuanced than I had anticipated. I wrote this as a new user of ARC, and was (and still am) unfamiliar with the internals of ARC and `runner-container-hooks`. From my perspective, it's a poor user experience when scheduled k8s pods that never get a chance to run cause my GitHub job to fail. I'm not really clear on the best way to address this issue, however.
> Thank you for raising this issue and for using `runner-container-hooks`! I would say that using pods really depends on your use case. If the job's pod fails, do we really need to restart it? What if the `run-script-step` kills the container? Does it make sense to re-run the job container that will be cleaned up after the job? The idea is to use the simplest resource that needs to exist for the duration of a hook's execution. If the job pod fails, we should fail the job. Also, the k8s Job resource is designed to run something to completion, whereas what we really need here is a place we can attach to and execute commands in. But maybe I am wrong.
I can see an argument for using pods directly instead of k8s Jobs here. That said, I'm not sure that if the job pod fails, we should always fail the job. In my scenario, the job pod gets scheduled to a k8s node, and the kubelet rejects it before it ever gets a chance to run. This manifests as a pod failure, but since the pod never ran, it hasn't really failed, per se. It's more an artifact of k8s forcing the user to either retry the pod or use a k8s Job instead. Note the particular failure, `Pod failed to come online with error`, is at:
Hey @willchan,
No problem, I am happy to discuss it. As noted in the README, this is just a basic implementation. If your cluster is not in a state where you can run it as is, you can customize it. One way you can also tackle this issue is to schedule your runners on nodes that are not running your applications. The issue would persist even if we changed this to a Job, since we set `nodeName` to the same node where the runner is executing (see https://github.com/actions/runner-container-hooks/blob/d988d965c57642f972246e28567301f6b4c054e1/packages/k8s/src/k8s/index.ts#L130). The reason for that is volume mounts: the job pod has to mount the same work directory that the runner populates.
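To make that constraint concrete, here is a minimal sketch of a pod pinned to the runner's node, using the positional pre-1.0 API of `@kubernetes/client-node`; the namespace, names, image, mount path, and claim are illustrative, not the hook's actual values:

```typescript
import * as k8s from '@kubernetes/client-node'

async function createJobPod(): Promise<void> {
  const kc = new k8s.KubeConfig()
  kc.loadFromDefault()
  const core = kc.makeApiClient(k8s.CoreV1Api)

  // The job pod is pinned to the runner's node so it can mount the same
  // work volume the runner writes the job workspace to.
  const pod: k8s.V1Pod = {
    metadata: { name: 'workflow-pod' },
    spec: {
      nodeName: process.env.RUNNER_NODE_NAME, // the node the runner is on
      restartPolicy: 'Never',
      containers: [
        {
          name: 'job',
          image: 'node:16',
          volumeMounts: [{ name: 'work', mountPath: '/__w' }],
        },
      ],
      volumes: [
        // Shared with the runner pod; the real hook wires this up to the
        // runner's work volume, and the details vary by setup.
        { name: 'work', persistentVolumeClaim: { claimName: 'runner-work' } },
      ],
    },
  }
  await core.createNamespacedPod('runners', pod)
}
```

Because of that `nodeName` pin, wrapping the pod in a Job would not buy a retry on a different node; the replacement pod would land on the same node that rejected it.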
It would be rather difficult, if not impossible, to have a hook that works on every cluster in the world. There are so many variables that can affect its execution. That is why we provide this repository to serve as a base.
In most use cases, you should have no issues executing the hook as is. But if your cluster is in a state where that is not possible, consider changing the way you schedule the runner pods so they run on nodes that are not running your other workloads, or changing the hook implementation itself :relaxed:
I think what I hear you saying is that this is beyond the scope of what runner-container-hooks should address, since it's just a basic implementation, which seems reasonable to me. I'll go back to ARC and see what they have to say. While your suggestions sound reasonable at first blush, I am not qualified to evaluate them. The suggestion about tweaking the cluster might be reasonable, but I am deliberately trying to simplify our cluster administration by using vanilla GKE Autopilot configurations, so I am loath to tweak them. Maybe ARC will consider changing the hook implementation itself. Thanks for sharing your thoughts on this.
I mean, even if the hook shipped with ARC were changed, the issue would still persist, since the change would address your use case rather than the generic one.
I think that having the basic hook is the right approach, and to swap in your own implementation, the only thing you really need to change is this Dockerfile: https://github.com/actions-runner-controller/actions-runner-controller/blob/fe05987eea33140abc11f70e49a8486b0dcd8cdd/runner/actions-runner.ubuntu-22.04.dockerfile#L64. There, instead of pulling the released hook, you copy your own build into the assets directory. That is pretty much it. Once you are done, you push your image to your container registry and specify that image for the runner resource.
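For example, the change could look something like this. This is a hypothetical sketch, not the actual ARC Dockerfile: the build command, the `packages/k8s/dist/` output path, and the `RUNNER_ASSETS_DIR` variable are assumptions, so check the linked Dockerfile for the real values:

```dockerfile
# The stock image downloads a released hook archive around the linked
# line; replace that step with a COPY of your own build output instead.

# Build your fork first (e.g. `npm ci && npm run build` in packages/k8s),
# then copy the resulting bundle into the runner's assets directory:
COPY packages/k8s/dist/ ${RUNNER_ASSETS_DIR}/k8s/
```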
I originally filed this bug with ARC (https://github.com/actions-runner-controller/actions-runner-controller/issues/2031#issuecomment-1324354979) but got redirected here. Kubernetes has a known race where pods can get scheduled onto nodes but then be rejected by the kubelet (for example, with `OutOfcpu`). As noted in the comment thread, using plain pods is an anti-pattern; it's preferable to use Jobs, so they can handle retries as needed.
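For reference, what I have in mind is roughly this (a minimal sketch using `@kubernetes/client-node`; the namespace, names, and image are illustrative, and it omits the volume and `nodeName` details discussed above):

```typescript
import * as k8s from '@kubernetes/client-node'

async function createWorkflowJob(): Promise<void> {
  const kc = new k8s.KubeConfig()
  kc.loadFromDefault()
  const batch = kc.makeApiClient(k8s.BatchV1Api)

  // A Job recreates its pod on failure, so a kubelet rejection such as
  // OutOfcpu produces a replacement pod rather than a failed workflow.
  const job: k8s.V1Job = {
    metadata: { name: 'workflow-job' },
    spec: {
      backoffLimit: 3, // retry up to 3 times before the Job itself fails
      template: {
        spec: {
          restartPolicy: 'Never',
          containers: [{ name: 'job', image: 'node:16' }],
        },
      },
    },
  }
  await batch.createNamespacedJob('runners', job)
}
```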