linustannnn opened this issue 6 months ago
You probably have to nest the spec under `template`, i.e.:

```yaml
template:
  spec:
    containers:
      - name: runner
        imagePullPolicy: Never
        image: github-runner:latest
        command: ["/home/runner/run.sh"]
```

Just a guess, as I can't see the rest of your values.yaml.
Yeah, it's nested already (`cat ~/arc-configuration/runner-scale-set/values.yaml | tail -n 100`):

```yaml
#   - name: side-car
#     image: example-sidecar

## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
  ## template.spec will be modified if you change the container mode
  ## with containerMode.type=dind, we will populate the template.spec with following pod spec
  ## template:
  ##   spec:
  ##     initContainers:
  ##       - name: init-dind-externals
  ##         image: ghcr.io/actions/actions-runner:latest
  ##         command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
  ##         volumeMounts:
  ##           - name: dind-externals
  ##             mountPath: /home/runner/tmpDir
  ##     containers:
  ##       - name: runner
  ##         image: ghcr.io/actions/actions-runner:latest
  ##         command: ["/home/runner/run.sh"]
  ##         env:
  ##           - name: DOCKER_HOST
  ##             value: unix:///var/run/docker.sock
  ##         volumeMounts:
  ##           - name: work
  ##             mountPath: /home/runner/_work
  ##           - name: dind-sock
  ##             mountPath: /var/run
  ##       - name: dind
  ##         image: docker:dind
  ##         args:
  ##           - dockerd
  ##           - --host=unix:///var/run/docker.sock
  ##           - --group=$(DOCKER_GROUP_GID)
  ##         env:
  ##           - name: DOCKER_GROUP_GID
  ##             value: "123"
  ##         securityContext:
  ##           privileged: true
  ##         volumeMounts:
  ##           - name: work
  ##             mountPath: /home/runner/_work
  ##           - name: dind-sock
  ##             mountPath: /var/run
  ##           - name: dind-externals
  ##             mountPath: /home/runner/externals
  ##     volumes:
  ##       - name: work
  ##         emptyDir: {}
  ##       - name: dind-sock
  ##         emptyDir: {}
  ##       - name: dind-externals
  ##         emptyDir: {}
  ######################################################################################################
  ## with containerMode.type=kubernetes, we will populate the template.spec with following pod spec
  ## template:
  ##   spec:
  ##     containers:
  ##       - name: runner
  ##         image: ghcr.io/actions/actions-runner:latest
  ##         command: ["/home/runner/run.sh"]
  ##         env:
  ##           - name: ACTIONS_RUNNER_CONTAINER_HOOKS
  ##             value: /home/runner/k8s/index.js
  ##           - name: ACTIONS_RUNNER_POD_NAME
  ##             valueFrom:
  ##               fieldRef:
  ##                 fieldPath: metadata.name
  ##           - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
  ##             value: "true"
  ##         volumeMounts:
  ##           - name: work
  ##             mountPath: /home/runner/_work
  ##     volumes:
  ##       - name: work
  ##         ephemeral:
  ##           volumeClaimTemplate:
  ##             spec:
  ##               accessModes: [ "ReadWriteOnce" ]
  ##               storageClassName: "local-path"
  ##               resources:
  ##                 requests:
  ##                   storage: 1Gi
  spec:
    containers:
      - name: runner
        imagePullPolicy: Never
        image: github-runner:latest
        command: ["/home/runner/run.sh"]

## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following value
## to help it finish RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
# controllerServiceAccount:
#   namespace: arc-system
#   name: test-arc-gha-runner-scale-set-controller
```

…and it's still trying to pull the image even though it's already built locally.
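(For anyone debugging the same symptom: the kubelet's side of the story shows up in the pod events. A quick check, with placeholder pod and namespace names:

```console
# with imagePullPolicy: Never and a missing image, the pod status/events
# should show ErrImageNeverPull rather than ImagePullBackOff
$ kubectl -n arc-runners get pods
$ kubectl -n arc-runners describe pod <runner-pod-name>
```
)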
How did you build the image and where did you push it? I just tried this out myself via:

```yaml
template:
  spec:
    containers:
      - name: runner
        image: my-private-registry.io/my-image
```

and it works for me.
@linustannnn what's your kubernetes implementation? With minikube, I use `minikube image build`. I've also had various degrees of success with `minikube image load`. A sketch of both approaches follows (tag and build-context path are placeholders).
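```console
# build directly against minikube's container runtime -- the image
# lands inside the cluster, so no registry push is needed
$ minikube image build -t github-runner:latest .

# or: build on the host and copy the resulting image into minikube
$ docker build -t github-runner:latest .
$ minikube image load github-runner:latest
```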
Yeah, I used minikube as well: I called `minikube docker-env` and then `docker build . -t github-runner:latest`, but I got the error in the original comment. I then used `helm install` to apply the config. If I push the image to my private repo and pull it from there it works, @geekflyer, but I'm wondering if I can use an image that has been built locally. Does `minikube image build` work? @hicksjacobp
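(For context, the `docker-env` flow I used looks roughly like this; note it only applies when minikube is running the docker container runtime:

```console
# point this shell's docker CLI at the docker daemon inside minikube
$ eval $(minikube docker-env)

# the build result now lands directly in minikube's image store
$ docker build . -t github-runner:latest
```
)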
@linustannnn I had the same issue, and it turns out that when you start specifying your own `spec` block, you need to delete/comment out the `containerMode.type`/`containerMode` block. So don't set it to `dind` or `kubernetes` — just toss out the entire thing, because it overwrites any custom config with its own template. Very counterintuitive.
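A sketch of what that leaves in values.yaml (image name, tag, and pull policy are taken from the original comment; everything else is illustrative):

```yaml
# containerMode:        # <- delete or comment out the whole block;
#   type: "dind"        #    otherwise it replaces your template.spec
template:
  spec:
    containers:
      - name: runner
        image: github-runner:latest
        imagePullPolicy: Never
        command: ["/home/runner/run.sh"]
```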
What would you like added?

I want to use my custom image, which I built locally. I've checked that the image exists locally using `docker ps`, and I tried to override the `imagePullPolicy` in runner-scale-set/values.yaml to `Never`, but it's still trying to pull the image from the registry.