actions / actions-runner-controller

Kubernetes controller for GitHub Actions self-hosted runners
Apache License 2.0

RunnerSet Runners fail to start with "RUNNER_NAME must be set" after upgrade to 0.22.1 #1305

Closed tbomberg closed 2 years ago

tbomberg commented 2 years ago

Describe the bug After upgrading ARC to 0.22.1 (also reproduced with 0.22.2; chart version 0.17.1/0.17.2), we noticed that newly created pods from our RunnerSet fail to start and show the following error message in the logs:

RUNNER_NAME must be set

I rolled back the controller to 0.22.0 (chart 0.17.0) and the RunnerSets started normally again.

RunnerDeployments are working in all versions.

To Reproduce

  1. Install the actions-runner-controller chart 0.17.1 with the following values. The pre-created secret holds the credentials of the GitHub App that is registered in the organisation.

    authSecret:
      name: controller-manager
    scope:
      singleNamespace: true
    image:
      pullPolicy: Always
    resources:
      requests:
        cpu: 10m
        memory: 128Mi
      limits:
        cpu: 1
        memory: 256M
  2. Create a RunnerSet with this manifest:

    apiVersion: actions.summerwind.dev/v1alpha1
    kind: RunnerSet
    metadata:
      name: vpc-devservices-runnerset
      namespace: asys-vpc-github-runner
    spec:
      ephemeral: false
      organization: our-organisation
      labels:
      - vpc-devservices-runnerset
      replicas: 1
      selector:
        matchLabels:
          app: vpc-devservices-runnerset
      serviceName: vpc-devservices-runnerset
      template:
        metadata:
          labels:
            app: vpc-devservices-runnerset
        spec:
          containers:
          - name: runner
            resources:
              requests:
                memory: 500Mi
                cpu: 10m
              limits:
                memory: 2Gi
                cpu: 1
  3. The runner container exits with RC=1 and the following log:

    Waiting until Docker is avaliable or the timeout is reached
    CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
    Github endpoint URL https://github.com/
    RUNNER_NAME must be set

    Log of the manager container in ARC:

    I0405 15:05:33.209132       1 request.go:665] Waited for 1.033433053s due to client-side throttling, not priority and fairness, request: GET:https://10.240.16.1:443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
    2022-04-05T15:05:33Z    INFO    controller-runtime.metrics  Metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
    2022-04-05T15:05:33Z    INFO    actions-runner-controller   Initializing actions-runner-controller  {"github-api-cache-duration": "9m50s", "sync-period": "10m0s", "runner-image": "summerwind/actions-runner:latest", "docker-image": "docker:dind", "common-runnner-labels": null, "watch-namespace": "asys-vpc-github-runner"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.builder  Registering a mutating webhook  {"GVK": "actions.summerwind.dev/v1alpha1, Kind=Runner", "path": "/mutate-actions-summerwind-dev-v1alpha1-runner"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.webhook  Registering webhook {"path": "/mutate-actions-summerwind-dev-v1alpha1-runner"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.builder  Registering a validating webhook    {"GVK": "actions.summerwind.dev/v1alpha1, Kind=Runner", "path": "/validate-actions-summerwind-dev-v1alpha1-runner"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.webhook  Registering webhook {"path": "/validate-actions-summerwind-dev-v1alpha1-runner"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.builder  Registering a mutating webhook  {"GVK": "actions.summerwind.dev/v1alpha1, Kind=RunnerDeployment", "path": "/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.webhook  Registering webhook {"path": "/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.builder  Registering a validating webhook    {"GVK": "actions.summerwind.dev/v1alpha1, Kind=RunnerDeployment", "path": "/validate-actions-summerwind-dev-v1alpha1-runnerdeployment"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.webhook  Registering webhook {"path": "/validate-actions-summerwind-dev-v1alpha1-runnerdeployment"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.builder  Registering a mutating webhook  {"GVK": "actions.summerwind.dev/v1alpha1, Kind=RunnerReplicaSet", "path": "/mutate-actions-summerwind-dev-v1alpha1-runnerreplicaset"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.webhook  Registering webhook {"path": "/mutate-actions-summerwind-dev-v1alpha1-runnerreplicaset"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.builder  Registering a validating webhook    {"GVK": "actions.summerwind.dev/v1alpha1, Kind=RunnerReplicaSet", "path": "/validate-actions-summerwind-dev-v1alpha1-runnerreplicaset"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.webhook  Registering webhook {"path": "/validate-actions-summerwind-dev-v1alpha1-runnerreplicaset"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.webhook  Registering webhook {"path": "/mutate-runner-set-pod"}
    2022-04-05T15:05:33Z    INFO    actions-runner-controller   starting manager
    2022-04-05T15:05:33Z    INFO    controller-runtime.webhook.webhooks Starting webhook server
    2022-04-05T15:05:33Z    INFO    controller-runtime.certwatcher  Updated current TLS certificate
    2022-04-05T15:05:33Z    INFO    controller-runtime.webhook  Serving webhook server  {"host": "", "port": 9443}
    2022-04-05T15:05:33Z    INFO    Starting server {"path": "/metrics", "kind": "metrics", "addr": "127.0.0.1:8080"}
    2022-04-05T15:05:33Z    INFO    controller-runtime.certwatcher  Starting certificate watcher
    I0405 15:05:34.069226       1 leaderelection.go:248] attempting to acquire leader lease asys-vpc-github-runner/actions-runner-controller...
    I0405 15:05:54.657952       1 leaderelection.go:258] successfully acquired lease asys-vpc-github-runner/actions-runner-controller
    2022-04-05T15:05:54Z    DEBUG   events  Normal  {"object": {"kind":"ConfigMap","namespace":"asys-vpc-github-runner","name":"actions-runner-controller","uid":"01ec3990-6bd8-4257-a590-8693dfbe60ad","apiVersion":"v1","resourceVersion":"91592721"}, "reason": "LeaderElection", "message": "asys-vpc-actions-runner-controller-7bbb657bb4-jztvd_0b453fca-bf0e-447b-b38f-50fb76d9218b became leader"}
    2022-04-05T15:05:54Z    DEBUG   events  Normal  {"object": {"kind":"Lease","namespace":"asys-vpc-github-runner","name":"actions-runner-controller","uid":"540b0336-c7c6-44bb-95bd-6c5c68e6d739","apiVersion":"coordination.k8s.io/v1","resourceVersion":"91592722"}, "reason": "LeaderElection", "message": "asys-vpc-actions-runner-controller-7bbb657bb4-jztvd_0b453fca-bf0e-447b-b38f-50fb76d9218b became leader"}
    2022-04-05T15:05:54Z    INFO    controller.runner-controller    Starting EventSource    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "Runner", "source": "kind source: *v1alpha1.Runner"}
    2022-04-05T15:05:54Z    INFO    controller.runner-controller    Starting EventSource    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "Runner", "source": "kind source: *v1.Pod"}
    2022-04-05T15:05:54Z    INFO    controller.runner-controller    Starting Controller {"reconciler group": "actions.summerwind.dev", "reconciler kind": "Runner"}
    2022-04-05T15:05:54Z    INFO    controller.runnerdeployment-controller  Starting EventSource    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerDeployment", "source": "kind source: *v1alpha1.RunnerDeployment"}
    2022-04-05T15:05:54Z    INFO    controller.runnerdeployment-controller  Starting EventSource    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerDeployment", "source": "kind source: *v1alpha1.RunnerReplicaSet"}
    2022-04-05T15:05:54Z    INFO    controller.runnerdeployment-controller  Starting Controller {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerDeployment"}
    2022-04-05T15:05:54Z    INFO    controller.runnerreplicaset-controller  Starting EventSource    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerReplicaSet", "source": "kind source: *v1alpha1.RunnerReplicaSet"}
    2022-04-05T15:05:54Z    INFO    controller.runnerreplicaset-controller  Starting EventSource    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerReplicaSet", "source": "kind source: *v1alpha1.Runner"}
    2022-04-05T15:05:54Z    INFO    controller.runnerreplicaset-controller  Starting Controller {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerReplicaSet"}
    2022-04-05T15:05:54Z    INFO    controller.runnerpod-controller Starting EventSource    {"reconciler group": "", "reconciler kind": "Pod", "source": "kind source: *v1.Pod"}
    2022-04-05T15:05:54Z    INFO    controller.runnerpod-controller Starting Controller {"reconciler group": "", "reconciler kind": "Pod"}
    2022-04-05T15:05:54Z    INFO    controller.runnerset-controller Starting EventSource    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerSet", "source": "kind source: *v1alpha1.RunnerSet"}
    2022-04-05T15:05:54Z    INFO    controller.runnerset-controller Starting EventSource    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerSet", "source": "kind source: *v1.StatefulSet"}
    2022-04-05T15:05:54Z    INFO    controller.runnerset-controller Starting Controller {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerSet"}
    2022-04-05T15:05:54Z    INFO    controller.horizontalrunnerautoscaler-controller    Starting EventSource    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "HorizontalRunnerAutoscaler", "source": "kind source: *v1alpha1.HorizontalRunnerAutoscaler"}
    2022-04-05T15:05:54Z    INFO    controller.horizontalrunnerautoscaler-controller    Starting Controller {"reconciler group": "actions.summerwind.dev", "reconciler kind": "HorizontalRunnerAutoscaler"}
    2022-04-05T15:05:54Z    INFO    controller.runner-controller    Starting workers    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "Runner", "worker count": 1}
    2022-04-05T15:05:54Z    INFO    controller.runnerdeployment-controller  Starting workers    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerDeployment", "worker count": 1}
    2022-04-05T15:05:54Z    INFO    controller.runnerpod-controller Starting workers    {"reconciler group": "", "reconciler kind": "Pod", "worker count": 1}
    2022-04-05T15:05:54Z    INFO    controller.runnerreplicaset-controller  Starting workers    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerReplicaSet", "worker count": 1}
    2022-04-05T15:05:54Z    INFO    controller.runnerset-controller Starting workers    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "RunnerSet", "worker count": 1}
    2022-04-05T15:05:54Z    INFO    controller.horizontalrunnerautoscaler-controller    Starting workers    {"reconciler group": "actions.summerwind.dev", "reconciler kind": "HorizontalRunnerAutoscaler", "worker count": 1}
    2022-04-05T15:06:24Z    DEBUG   actions-runner-controller.runnerset Created replica(s)  {"runnerset": "asys-vpc-github-runner/vpc-devservices-runnerset", "lastSyncTime": null, "effectiveTime": "<nil>", "templateHashDesired": "7bfb6646c9", "replicasDesired": 1, "replicasPending": 0, "replicasRunning": 0, "replicasMaybeRunning": 0, "templateHashObserved": [], "created": 1}
    2022-04-05T15:06:24Z    DEBUG   actions-runner-controller.runnerset Skipped reconcilation because owner is not synced yet   {"runnerset": "asys-vpc-github-runner/vpc-devservices-runnerset", "owner": "asys-vpc-github-runner/vpc-devservices-runnerset-24628", "pods": null}
  4. Delete the RunnerSet

  5. Roll back ARC to chart 0.17.0

  6. Recreate the RunnerSet => runner pods start normally and are registered with the GitHub organization

Expected behavior Runner Pods from the RunnerSet StatefulSets start up and register successfully with the GitHub organization.


mumoshu commented 2 years ago

@tbomberg Would you mind sharing the output of kubectl get po $POD for any runner pod that is managed by a RunnerSet in your cluster? I think it's missing the said envvar, and that's what's causing this. We need to figure out why the runner pod ends up in such a state, because any runner pod managed by a StatefulSet created by a RunnerSet should have the RUNNER_NAME envvar, thanks to ARC's mutating webhook.

tbomberg commented 2 years ago

@mumoshu Yes, that is the actual difference.

While the StatefulSets created by the 0.22.0 controller look the same as the ones from the 0.22.1 controller (the env vars RUNNER_NAME and RUNNER_TOKEN are in neither of the STS pod templates), when I compare the Pods I can see that the needed env vars are injected into the Pods only when the 0.22.0 controller is active.

This is the complete environment section of the Pod from the 0.22.1 controller:

  containers:
  - env:
    - name: RUNNER_ORG
      value: our-organization
    - name: RUNNER_REPO
    - name: RUNNER_ENTERPRISE
    - name: RUNNER_LABELS
      value: vpc-devservices-runnerset
    - name: RUNNER_GROUP
    - name: DOCKER_ENABLED
      value: "true"
    - name: DOCKERD_IN_RUNNER
      value: "false"
    - name: GITHUB_URL
      value: https://github.com/
    - name: RUNNER_WORKDIR
      value: /runner/_work
    - name: RUNNER_EPHEMERAL
      value: "false"
    - name: DOCKER_HOST
      value: tcp://localhost:2376
    - name: DOCKER_TLS_VERIFY
      value: "1"
    - name: DOCKER_CERT_PATH
      value: /certs/client
    - name: RUNNER_FEATURE_FLAG_EPHEMERAL
      value: "true"

It looks like the controller injects these vars directly into the pods, not via the STS template, but now this injection is either not happening or incomplete.

I will attach the complete manifests for both StatefulSet and Pod as files:

runner-sts-pod-0.22.0.yaml.txt runner-sts-pod-0.22.1.yaml.txt runner-sts-0.22.0.yaml.txt runner-sts-0.22.1.yaml.txt

mumoshu commented 2 years ago

@tbomberg Hey! Thanks a lot for your detailed report. It did help investigate it fully.

So, this seems to be happening due to a "fix" in the chart. You'll need to update your values.yaml to accommodate it.

Here's the excerpt from your helm chart values:

scope:
  singleNamespace: true

Until chart v0.17.0, there was a bug in the chart where multiple instances of ARC would interfere with each other via the mutating and validating admission webhooks, even if you specified scope.singleNamespace.

In v0.17.1, we fixed it by including watchNamespace in the namespaceSelectors of the admission webhook configs.
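For illustration, with scope.watchNamespace set to asys-vpc-github-runner (the namespace from this report), the generated webhook configs would contain a selector along these lines:

```yaml
# Sketch of the namespaceSelector emitted by the chart since v0.17.1
# (the namespace value is assumed from this report):
namespaceSelector:
  matchLabels:
    name: asys-vpc-github-runner
```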

Do you actually have multiple instances of ARC in your cluster? Do you really need to restrict ARC's watch namespace? If not, you can just omit scope.singleNamespace from your values.yaml and everything should start working again.

Otherwise, add scope.watchNamespace to your values.yaml. Assuming the only namespace that contains RunnerSets (and hence StatefulSets and runner pods) is asys-vpc-github-runner, it should look like:

scope:
  singleNamespace: true
  watchNamespace: asys-vpc-github-runner

tbomberg commented 2 years ago

Thanks for the investigation and the pointer to the problem area.

I tested this in my environment with chart versions 0.17.1, 0.17.2 and 0.17.3.

I can live with singleNamespace: false, but it looks like this is not working for RunnerSets as expected.

mumoshu commented 2 years ago

The RunnerPods still do not get the RUNNER_NAME injected

Did you recreate the runner pod after you upgraded your Helm release? The envvar is injected by the mutating webhook, which means you need to recreate any "broken" runner pods (by deleting them with kubectl delete) so that the runner name gets injected.

tbomberg commented 2 years ago

Did you recreate the runner pod after you upgraded your Helm release? The envvar is injected by the mutating webhook, which means you need to recreate any "broken" runner pods (by deleting them with kubectl delete) so that the runner name gets injected.

Yes, I deleted the RunnerSet, uninstalled ARC, reinstalled it with the configuration to test and let the newly configured controller create the StatefulSets and Pods on its own.

mumoshu commented 2 years ago

@tbomberg Thanks for the info! Hmm, that seems impossible. Would you also mind sharing the mutatingwebhookconfiguration that was installed via Helm? I'm especially interested in whether it has a correct namespace selector (for asys-vpc-github-runner, since you seem to have RunnerSets and runner pods in that namespace). If it's there, the only remaining cause would be that your K8s cluster is somehow broken and not respecting the mutating webhook. If it isn't there, you've almost certainly missed something while upgrading the Helm chart.

tbomberg commented 2 years ago

For me the generated namespaceSelectors look just fine: mutatingwebhookconfig.yaml.txt

I updated the steps to reproduce the problem to use a kind cluster, so it is easy to reproduce everywhere.

Steps to reproduce

kind create cluster

helm repo add jetstack https://charts.jetstack.io
helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
helm repo update

# Install Cert-Manager
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.8.0 \
  --set installCRDs=true

kubectl create namespace actions-runner-system

# Install the ARC App: https://github.com/settings/apps/new?url=http://github.com/actions-runner-controller/actions-runner-controller&webhook_active=false&public=false&administration=write&actions=read
# Obtain APP_ID INSTALLATION_ID and PRIVATE_KEY_FILE_PATH

# Create Secret

kubectl create secret generic controller-manager \
    -n actions-runner-system \
    --from-literal=github_app_id=${APP_ID} \
    --from-literal=github_app_installation_id=${INSTALLATION_ID} \
    --from-file=github_app_private_key=${PRIVATE_KEY_FILE_PATH}

helm upgrade --install --namespace actions-runner-system --create-namespace \
  --wait actions-runner-controller actions-runner-controller/actions-runner-controller \
  --set scope.singleNamespace=true --set scope.watchNamespace=actions-runner-system

export ARC_RUNNER_REPOSITORY=tbomberg/test-arc

cat <<EOF >kind-runnerset.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
  name: kind-runnerset
  namespace: actions-runner-system
spec:
  ephemeral: false
  repository: ${ARC_RUNNER_REPOSITORY}
  labels:
  - kind-runnerset
  replicas: 1
  selector:
    matchLabels:
      app: kind-runnerset
  serviceName: kind-runnerset
  template:
    metadata:
      labels:
        app: kind-runnerset
EOF

kubectl apply -f kind-runnerset.yaml

# Catch the logs from the runner pod:
kubectl -n actions-runner-system logs -l app=kind-runnerset -c runner  -f

You have to be quick to get the logs from the pod. Since 0.17.3, the StatefulSet and Pod are immediately removed and replaced upon any failure, which makes debugging difficult.

Also in this setup I get:

$ (kind-kind:default) kubectl -n actions-runner-system logs -l app=kind-runnerset -c runner  -f 
2022-04-13 10:34:03.322  DEBUG --- Docker enabled runner detected and Docker daemon wait is enabled
2022-04-13 10:34:03.327  DEBUG --- Waiting until Docker is available or the timeout is reached
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
2022-04-13 10:34:11.117  DEBUG --- Github endpoint URL https://github.com/
2022-04-13 10:34:11.121  ERROR --- RUNNER_NAME must be set
mumoshu commented 2 years ago

--set scope.watchNamespace=actions-runner-system seems definitely wrong. Assuming your RunnerDeployment had namespace of asys-vpc-github-runner, it should be --set scope.watchNamespace=asys-vpc-github-runner

tbomberg commented 2 years ago

As I wrote, I changed my setup to a kind cluster, staying as close to the defaults as possible to reproduce.

I just want to keep my working setup running without having to tear it down for every question asked.

mumoshu commented 2 years ago

Thanks for that! But the fix here would be to change the helm chart value as I said.

The rationale is that since 0.22.0, RunnerDeployment relies on the mutating webhook and the same runner pod management logic that backs RunnerSet, which requires the RUNNER_NAME envvar to be set. watchNamespace needs to be configured properly for the mutating webhook to work, hence your issue.

In 0.21.x and below, RunnerDeployment didn't depend on the mutating webhook, which is why it worked with your wrong watchNamespace setting.

tbomberg commented 2 years ago

To what should I set the watchNamespace?

The controller and the runners are located in the very same namespace, which is now actions-runner-system.

Is this not supported? I can check if separate namespaces make any difference.

tbomberg commented 2 years ago

It does not make a difference. After reconfiguring the controller to watch a separate namespace, trying to set up a RunnerSet there still shows the problem.

mumoshu commented 2 years ago

@tbomberg Ah sorry, I missed that you changed the namespace of the example RunnerSet to actions-runner-system. It does align with scope.watchNamespace and the namespaceSelector of the mutatingwebhookconfig, so almost everything looks good now.

The last missing piece, which I just noticed, might be that the actions-runner-system namespace was never labeled with name=actions-runner-system.

Rereading your mutatingwebhookconfig:

  namespaceSelector:
    matchLabels:
      name: actions-runner-system

This says that the namespace must be labeled with the name key. A Helm chart is unable to modify an existing namespace to add that label, so it must be done by you. Try kubectl label ns actions-runner-system name=actions-runner-system.
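If you manage namespaces declaratively, an equivalent fix (a sketch, with the namespace name taken from this thread) would be to declare the label on the Namespace object itself:

```yaml
# Equivalent to the kubectl label command above: declare the `name`
# label on the Namespace so the webhook's namespaceSelector matches it.
apiVersion: v1
kind: Namespace
metadata:
  name: actions-runner-system
  labels:
    name: actions-runner-system
```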

I wish there were a way to let a mutating webhook match the target namespace by name, but apparently there isn't 😢

tbomberg commented 2 years ago

@mumoshu Yes, I can confirm that all of this was caused by the missing name label on the namespace.

rxa313 commented 2 years ago

@mumoshu I just wanted to report that this was not fixed in 0.24.1 or 0.25.2 -- in both cases I tried to upgrade from 0.21.1 to 0.25.2 after updating all CRDs, and I even wiped my entire cluster of CRDs, uninstalled everything, and installed from scratch to make sure.

I was finally able to stop my StatefulSet from constantly restarting after running: kubectl label ns actions-runner-system name=actions-runner-system

mumoshu commented 2 years ago

@rxa313 Thanks for reporting! Yes, neither ARC nor the chart labels the namespace automatically, so I believe this is where we need to update the documentation (perhaps in our chart, next to the description of the watchNamespace and singleNamespace values).

userqjin commented 1 year ago

Hitting this issue as well. I have multiple controllers in the same cluster, in the namespaces cicd--ci and cicd--cd, with the RunnerSets in the same namespaces. I set watchNamespace and labeled the namespaces properly:

k get namespace cicd--cd --show-labels
NAME       STATUS   AGE   LABELS
cicd--cd   Active   38d   kubernetes.io/metadata.name=cicd--cd,name=cicd--cd

For RunnerDeployment it works fine, but not for RunnerSet: I hit the same issue with missing RUNNER_NAME and RUNNER_TOKEN. Any suggestions for running multiple controllers in one cluster?

fernferret commented 1 year ago

I wish there was any way to let a mutating webhook match the target namespace by name, but apparently there's no way

@mumoshu I think we could use the well-known label kubernetes.io/metadata.name:

  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: my-github-runners

According to the docs, this label is always set to the namespace name and is immutable. So by using it rather than name: ..., things would hook up correctly, I believe, unless the name: label is being used somewhere else.

If this is the only place name is used, I'd think this would be the following change in webhook_configs.yaml:

  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
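To sketch what that template would produce: if scope.watchNamespace is unset, the {{ default }} falls back to the release namespace, so for a chart release installed into actions-runner-system (the namespace used earlier in this thread) the rendered selector would presumably be:

```yaml
# Hypothetical rendered output of the proposed template when
# scope.watchNamespace is unset and the release namespace is
# actions-runner-system; no manual `kubectl label` step would be
# needed, since the API server sets this label automatically.
namespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: actions-runner-system
```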