actions / runner-container-hooks

Runner Container Hooks for GitHub Actions
MIT License

ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE warns about container name #116

Closed wasd171 closed 9 months ago

wasd171 commented 9 months ago

Hello!

I am using ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE with the following content (mildly redacted):

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  annotations:
    karpenter.sh/do-not-evict: "true"
spec:
  serviceAccountName: xxx-arc-runners-sa
  securityContext:
    fsGroup: 123
  containers:
    - name: "$job"
      resources:
        requests:
          memory: 4Gi
          cpu: 1
```

It gets applied successfully, but on every run I get this warning: `Warning: Skipping name override: name can't be overwritten`

Since I am using `$job` as the container name, I would expect to see no warning.

Wielewout commented 9 months ago

This should be fixed as part of https://github.com/actions/runner-container-hooks/pull/111. It's not included in a new release yet though :smile:

nikola-jokic commented 9 months ago

We will issue a new release as soon as we add #115. It was mentioned that container steps in kubernetes mode may break on argument splitting, so I'll do my best to include that fix as well, and then we can create a new release :relaxed:

adiroiban commented 9 months ago

Thanks Nikola for the ADR 0096 implementation. It works well :)

Was ADR 0096 accepted?

I can see it's still "proposed" https://github.com/actions/runner-container-hooks/blob/main/docs/adrs/0096-hook-extensions.md

Rather than having it implemented as a file, I was expecting that we could have it loaded from a ConfigMap (so that we don't need a new CRD) and have it defined as part of the Helm chart.

nikola-jokic commented 9 months ago

Hey @adiroiban,

Thanks! It is accepted; we should update the status. You can use a ConfigMap, that was the intention:

  1. In the runner pod spec, you leave containerMode empty (for simplicity, it can be done with container mode as well)
  2. Take the expanded definition for a runner pod that is commented out and apply it
  3. Add your config map to template.spec.volumes
  4. Add a mount to template.spec.containers[runner].volumeMounts and specify a path on the runner
  5. Set env that tells the hook where the extension file is
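
As a rough sketch of steps 3 and 4, the ConfigMap could hold the same kind of pod template used earlier in this thread (the name `hook-extension`, the namespace, and the key are assumptions for illustration, not chart defaults):

```yaml
# Hypothetical ConfigMap carrying the hook extension template.
# Name, namespace, and key are illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: hook-extension
  namespace: arc-runners
data:
  extension.yaml: |
    apiVersion: v1
    kind: PodTemplate
    metadata:
      annotations:
        karpenter.sh/do-not-evict: "true"
    spec:
      securityContext:
        fsGroup: 123
```

The ConfigMap is then referenced from `template.spec.volumes` and mounted into the runner container, with the env var from this issue pointing at the mounted file.
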

Lastly, we decided not to include it as part of the Helm chart. The runner can be configured in many ways, and ARC should not create many dependencies on the hook. Mostly, we included things that the controller should react to (like cleanup for service accounts and cleanup on uninstall).

Let's say you want to specify a hook extension so you can comply with cluster rules (e.g. securityContext must be set for each pod). With the decoupling, you can create a single ConfigMap with the extension rules and provide it to the runner for each scale set deployment. If we included it in the scale set Helm chart, we would create as many ConfigMaps as you have scale sets.

That is not to say it may never be included in the scale set Helm chart in the future. I'm just saying that the hook is not intended to be only an ARC extension. It is an extension of the runner, so we are trying not to tie the implementation of ARC too closely to the hook itself, but rather to provide a guide for how it can be accomplished :relaxed:

adiroiban commented 9 months ago

Thanks for the info.

I am looking to see what is needed to update the official documentation at https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#using-kubernetes-mode

I will try to find some time and update that doc page.


I understand the decoupling and it makes sense.

But maybe, for ease of use, the Helm chart could provide a turn-key solution.

I was expecting to have it defined in the chart, maybe somewhere like containerMode.podTemplates holding the content of the ConfigMap, so that a ConfigMap could be auto-generated for each runner set.


I think that another main use case is to set up the hostedtoolcache persistent volume.

I am happy to have the helm as "light" as possible so that it's easy to use it with Windows runners.

nikola-jokic commented 9 months ago

Closing as the new release is out :relaxed:

johnjeffers commented 8 months ago

I can't find any instructions on how to actually use this. I understand the new release is out, but I'm still getting the "skipping name override" warning, and I'm on the latest version (0.7.0) of the runner scale set controller.

nikola-jokic commented 8 months ago

Hey @johnjeffers,

Just to clarify: the ARC version is not tied in any way to the hook; the runner version is. The reason you are still seeing this warning is that the runner has not yet been released with the latest version of the hook :disappointed: We are planning to update the docs on how to set up a hook extension, but we are exploring how to make the setup easier to use.

Right now, if you are planning on configuring the extension with the ConfigMap, you would have to:

  1. Create config map with the extension content
  2. Comment out containerMode field and expand the containerMode kubernetes spec
  3. Add volume mount referencing the config map
  4. Set the env pointing to the file mounted from the config map
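
Steps 2 through 4 might look roughly like this in the scale set values.yaml (a sketch, not verified chart syntax: the ConfigMap name `hook-extension`, the mount path, and the image tag are assumptions; the env var is the one from this issue):

```yaml
## containerMode:
##   type: "kubernetes"   # step 2: commented out; the spec below is expanded instead
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest   # assumed image
        command: ["/home/runner/run.sh"]
        env:
          # step 4: point the hook at the mounted extension file
          - name: ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE
            value: /home/runner/pod-template/extension.yaml
        volumeMounts:
          # step 3: mount the ConfigMap into the runner container
          - name: pod-template
            mountPath: /home/runner/pod-template
            readOnly: true
    volumes:
      - name: pod-template
        configMap:
          name: hook-extension   # assumed ConfigMap name
```
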

johnjeffers commented 8 months ago

I'll try that, and in the meantime let me explain my use case, which I think is a very common one. I want to accomplish 3 things.

  1. I want to self-host runners in kubernetes to take advantage of easy autoscaling, to keep the build process inside my own VPC, and to leverage the IAM that's built into AWS instances so I don't have to store authentication secrets elsewhere.
  2. I want to use my own image, so I can preload software that I need, rather than installing the software every time the pipeline runs.
  3. I want to be able to run docker build and push operations using that custom image.

I have 1 and 2 figured out, but 3 has been a real challenge, much more difficult than I expected it to be. I thought that using a pod template and adding a DinD sidecar would be the solution, which is what led me to this issue.

So, to confirm, is this something I'll be able to do using the instructions you just gave me?

nikola-jokic commented 8 months ago

Oh, you can't really build images using the k8s hook. You can use the docker hook, but we did not implement special option handling; however, if your workflow file specifies docker options, they should be re-used. The hook can only build images if you use kaniko, which is not officially supported, although there is a way to make it work. If you are okay with dind running in privileged mode, a runner without the docker hook can also work. If you want to customize the way docker commands are invoked, you can use our docker hook as a starting point, build it, and add it to the image.

Getting kaniko builds working is not as easy as running the docker hook or simply adding a dind side-car and having the runner execute docker commands. Please let me know if you have any trouble.

johnjeffers commented 8 months ago

I'm fine with running dind in privileged mode, I just cannot find any clear instructions on how to actually do this. I just want to be able to run a docker build and a docker push with my custom runner image, as I mentioned above. Maybe I'm asking this question in the wrong repo, and it has nothing to do with container hooks, I have no idea. Trying to get this working has been incredibly frustrating, and if documentation on how to do this exists, I haven't found it. Are you able to point me to something that clearly explains how to do what I'm trying to do?

nikola-jokic commented 8 months ago

Docs on how to set up ARC in dind mode: https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#using-docker-in-docker-or-kubernetes-mode-for-containers Docs on how to customize dind will hopefully be published soon, but you can see everything here
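
For the docker build/push use case discussed above, the dind setup in those docs comes down to setting containerMode in the scale set values (a minimal sketch; the config URL and secret name are placeholders):

```yaml
# Minimal gha-runner-scale-set values sketch for dind mode.
# githubConfigUrl and githubConfigSecret are placeholders.
githubConfigUrl: https://github.com/my-org/my-repo
githubConfigSecret: my-github-auth-secret
containerMode:
  type: "dind"   # runs a privileged dind side-car so docker build/push work
```

With this mode, docker commands in the workflow talk to the side-car daemon, so no hook extension is needed just to run docker build.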