@alexandrst88 Hey! I believe this is a valid feature request. Probably adding annotations next to https://github.com/summerwind/actions-runner-controller/blob/0edf0d59f74bcbbc7c9bd899c6547c2a49a9b1b5/api/v1alpha1/runner_types.go#L34 would make sense.
cc/ @summerwind
Thank you for the feedback! That's a nice idea! I'm also considering a similar feature; in my use case, I'd like to set resource limits.
However, managing each field of a pod individually can be complicated. How about adding a `podTemplate` field to Runner?
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example
spec:
  repository: summerwind/actions-runner-controller
  podTemplate:
    metadata:
      annotations:
        iam.amazonaws.com/role: role-arn
    spec:
      serviceAccountName: example
      ...
```
I'm currently working on StatefulSet-based runner management in #4. After that, it might be a good time to implement `podTemplate`.
I might be wrong, but in `runner_controller.go`:
```go
pod := corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:      runner.Name,
		Namespace: runner.Namespace,
	},
	Spec: corev1.PodSpec{
		RestartPolicy: "OnFailure",
		Containers: []corev1.Container{
			{
				Name:            containerName,
				Image:           runnerImage,
				ImagePullPolicy: "Always",
				Env:             env,
				VolumeMounts: []corev1.VolumeMount{
					{
						Name:      "docker",
						MountPath: "/var/run",
					},
				},
				SecurityContext: &corev1.SecurityContext{
					RunAsGroup: &group,
				},
			},
			{
				Name:  "docker",
				Image: r.DockerImage,
				VolumeMounts: []corev1.VolumeMount{
					{
						Name:      "docker",
						MountPath: "/var/run",
					},
				},
				SecurityContext: &corev1.SecurityContext{
					Privileged: &privileged,
				},
			},
		},
		Volumes: []corev1.Volume{
			{
				Name: "docker",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			},
		},
	},
}
```
there is only one way to create the pod, and some fields of the pod spec, like `ImagePullPolicy` or `RestartPolicy`, are effectively hardcoded. If you defined:
```yaml
repository: summerwind/actions-runner-controller
podTemplate:
  metadata:
    annotations:
      iam.amazonaws.com/role: role-arn
  spec:
    serviceAccountName: example
    ...
```
you would also need to specify here all the fields from the previous code snippet. But wouldn't it be great to deepcopy or merge the fields from the pod template into the pod creation above, like `podTemplate.Metadata.DeepCopy(pod.Metadata)` and `podTemplate.Spec.DeepCopy(pod.Spec)`?
> you would also need to specify here all the fields from the previous code snippet. But wouldn't it be great to deepcopy or merge the fields from the pod template into the pod creation above, like `podTemplate.Metadata.DeepCopy(pod.Metadata)` and `podTemplate.Spec.DeepCopy(pod.Spec)`?
Yeah, that would work!
We'd ideally want "strategic-merge-patch" on containers, volumes, and volumeMounts. But that seems like a YAGNI and we can defer it to another github issue/feature request.
So, in the meantime, I believe you can just deepcopy the pod template into a new pod spec, and set default values to the missing fields.
For instance, when the pod template in the runner spec misses `RestartPolicy`, you'd set it to `OnFailure` as we already do. Similarly, if the pod template misses `Containers[]`, you'd set it to the two containers as we already do.
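For the record, the deferred strategic-merge idea could later build on the upstream apimachinery helper. Here is a minimal sketch under that assumption (plain `corev1.PodSpec` values, not the controller's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// The controller's hardcoded defaults.
	defaults := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyOnFailure,
		Containers:    []corev1.Container{{Name: "runner", Image: "default/runner"}},
	}
	defaultsJSON, err := json.Marshal(defaults)
	if err != nil {
		panic(err)
	}

	// A user-supplied patch. Strategic merge understands list merge keys
	// (a container's "name"), so the "runner" entry below is merged into
	// the default runner container instead of replacing the whole list.
	patchJSON := []byte(`{
		"serviceAccountName": "example",
		"containers": [{"name": "runner", "imagePullPolicy": "IfNotPresent"}]
	}`)

	mergedJSON, err := strategicpatch.StrategicMergePatch(defaultsJSON, patchJSON, corev1.PodSpec{})
	if err != nil {
		panic(err)
	}

	var merged corev1.PodSpec
	if err := json.Unmarshal(mergedJSON, &merged); err != nil {
		panic(err)
	}
	// The default image survives and the pull policy is overlaid.
	fmt.Println(merged.ServiceAccountName, merged.Containers[0].Image, merged.Containers[0].ImagePullPolicy)
}
```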
Funny thing: DeepCopy actually removes existing fields. For example, it wiped out the containers field when I only set annotations in the template.
Second, Kubernetes also validates embedded spec fields. For example, if you define `podTemplate` like this:
```yaml
podTemplate:
  metadata:
    annotations:
      iam.amazonaws.com/role: role-arn
  template:
    spec:
      serviceAccountName: testing
```
the Kubernetes API validation forces you to also specify the container fields :)
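The DeepCopy behavior described above can be reproduced with a small sketch (illustrative only; `DeepCopyInto` is the generated deep-copy method on `corev1.PodSpec`):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The pod spec the controller has already populated.
	pod := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "runner"}},
	}

	// A user template that only sets serviceAccountName.
	template := corev1.PodSpec{ServiceAccountName: "example"}

	// DeepCopyInto begins with `*out = *in`, so it overwrites every field
	// of the destination rather than merging: the containers are wiped.
	template.DeepCopyInto(&pod)

	fmt.Println(pod.ServiceAccountName, len(pod.Containers)) // example 0
}
```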
lol, it seems like here, https://github.com/elastic/cloud-on-k8s/issues/1822, they removed that validation from the CRD.
What do you think about just moving the pod creation into the CRD within the runner?
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
```
or just skip the validation like in https://github.com/elastic/cloud-on-k8s, because the creation of the pod will be controlled by API validation anyway.
It will be hard to maintain the fields of PodTemplate ourselves, because the k8s API will evolve and add or remove fields.
wow, looking forward to this feature as well. I'd like to set an IAM role on the runner pod with podTemplate.
What's the status of PR #7? I can contribute as well.
Thanks again for the nice operator.
Re: comment https://github.com/summerwind/actions-runner-controller/issues/5#issuecomment-598324941
I think it depends on which principle we want to follow: 1) minimize the custom podTemplate for the user, or 2) always require the user to explicitly specify the full runner podTemplate. I'd prefer option 2, since it makes it very clear to people which container spec they are using, e.g. the version of the runner docker image, whether an extra docker container is needed, the runner container's resource spec, etc. Just my 2 cents.
Thanks, Nick
BTW, @alexandrst88, regarding the pod spec merge logic: yes, if you copy the custom podTemplate into the controller's pod instance, the containers spec gets lost.
So I think we can implement the merge logic the other way around, i.e. merge the controller-managed pod fields into the custom pod.
Something like this (there is still room to improve the code below):
```go
// copy custom podTemplate
pod := corev1.Pod{}
if runner.Spec.PodTemplate.ObjectMeta.Size() > 0 {
	runner.Spec.PodTemplate.ObjectMeta.DeepCopyInto(&pod.ObjectMeta)
}
if runner.Spec.PodTemplate.Template.Spec.Size() > 0 {
	runner.Spec.PodTemplate.Template.Spec.DeepCopyInto(&pod.Spec)
}

// merge runner-specific pod spec
pod.ObjectMeta.Name = runner.Name
if pod.ObjectMeta.Namespace == "" {
	pod.ObjectMeta.Namespace = runner.Namespace
}
if len(pod.Spec.Containers) == 0 {
	pod.Spec.Containers = []corev1.Container{
		{
			Name:            containerName,
			Image:           runnerImage,
			ImagePullPolicy: "Always",
			Env:             env,
			VolumeMounts: []corev1.VolumeMount{
				{
					Name:      "docker",
					MountPath: "/var/run",
				},
			},
			SecurityContext: &corev1.SecurityContext{
				RunAsGroup: &group,
			},
		},
		{
			Name:  "docker",
			Image: r.DockerImage,
			VolumeMounts: []corev1.VolumeMount{
				{
					Name:      "docker",
					MountPath: "/var/run",
				},
			},
			SecurityContext: &corev1.SecurityContext{
				Privileged: &privileged,
			},
		},
	}
	pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
		Name: "docker",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		},
	})
	pod.Spec.RestartPolicy = "OnFailure"
} else {
	// append runner-specific env vars
	for i := 0; i < len(pod.Spec.Containers); i++ {
		pod.Spec.Containers[i].Env = append(pod.Spec.Containers[i].Env, env...)
	}
}
```
Yeap, @missedone, thanks, it makes sense. About the containers field: I think kustomize could remove that field being required in the CRD.
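For what it's worth, a hedged sketch of such a kustomize patch, assuming an apiextensions `v1beta1` CRD generated by kubebuilder (the exact JSON path depends on the generated schema, so treat this as illustrative):

```yaml
# kustomization.yaml
patchesJson6902:
  - target:
      group: apiextensions.k8s.io
      version: v1beta1
      kind: CustomResourceDefinition
      name: runners.actions.summerwind.dev
    patch: |-
      # Drop the "containers is required" constraint inherited from PodSpec.
      - op: remove
        path: /spec/validation/openAPIV3Schema/properties/spec/properties/podTemplate/properties/spec/required
```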
Does anyone have a real use case that needs adding a sidecar to the runner pod or replacing the runner container? If not, I'd like to just ignore the `containers` field in the pod template, to keep the controller simple.
Yes. For example, I'd like to build a golang app, but it turns out the default runner image doesn't have make installed. Also, we have several teams building different tech stacks, with different tooling dependencies. That is to say, I'd like to give each team the flexibility to customize the runner docker image.
Thank you for sharing your use case!
If you want to use a custom runner image, you can use the `image` field ;-)
Currently the controller watches the runner's pod state and restarts the pod after the actions job is completed. This depends on the container name. If we introduce a pod template, the restart process could become difficult because the container name would be a dynamic value.
To solve this issue, I propose the following solution:

- `annotations` and `labels` of the Runner are inherited by its Pod resource.
- Add a `serviceAccountName` field to allow specifying the Pod's service account.

The manifest based on the proposal would be:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example
  annotations:
    iam.amazonaws.com/role: role-arn
spec:
  repository: summerwind/actions-runner-controller
  serviceAccountName: example
  image: your/runner:latest
```
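In the reconciler, that inheritance could look roughly like the following sketch (assuming the Runner spec gains a `serviceAccountName` field; not the final implementation):

```go
// Inherit metadata from the Runner onto the pod it creates.
pod.ObjectMeta.Labels = runner.ObjectMeta.Labels
pod.ObjectMeta.Annotations = runner.ObjectMeta.Annotations

// Proposed field: let users pick the pod's service account.
pod.Spec.ServiceAccountName = runner.Spec.ServiceAccountName
```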
Yeap, in this case we'd need to maintain the pod spec fields under the runner spec :) Like requests/limits, etc.
I generally agree with @summerwind's proposal https://github.com/summerwind/actions-runner-controller/issues/5#issuecomment-599968643 :)
The good part of it is that we can gradually extend it to add more features (dedicated issues would be great), while making it clear that it doesn't necessarily support all of the pod spec.
> Yeap, in this case we'd need to maintain the pod spec fields under the runner spec :) Like requests/limits, etc.
I believe this is a valid feature request.
Can we assume that we're going to add the pod- and main-container-related fields at the top level of the runner spec?
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example
  annotations:
    iam.amazonaws.com/role: role-arn
spec:
  repository: summerwind/actions-runner-controller
  serviceAccountName: example
  image: your/runner:latest
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: "0.5"
```
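Wiring such a field through could be as small as this sketch (assuming `runner.Spec.Resources` would be a `corev1.ResourceRequirements`; illustrative only):

```go
// Apply the Runner-level resource requirements to the runner container,
// which the controller creates as the first entry in the containers list.
pod.Spec.Containers[0].Resources = runner.Spec.Resources
```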
This can still be evolved to support additional containers like this (again, a dedicated issue should be created for this; I'm writing it here just to sync up):
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example
  annotations:
    iam.amazonaws.com/role: role-arn
spec:
  repository: summerwind/actions-runner-controller
  serviceAccountName: example
  image: your/runner:latest
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: "0.5"
  sidecarContainers:
  - name: "fluentd"
    image: ...
  - name: istio-proxy
    image: ...
```
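And honoring such a field could look like this sketch (assuming a hypothetical `SidecarContainers []corev1.Container` field on the Runner spec, which is not part of the current API):

```go
// Keep the managed runner/docker containers first, then append the
// user-supplied sidecars, so the controller can still find its own
// containers by their fixed names.
pod.Spec.Containers = append(pod.Spec.Containers, runner.Spec.SidecarContainers...)
```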
Yeah, again, that's for a separate feature request.
OK, so the final design to be addressed by my PR would be:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
## this metadata will be inherited by the pod, so the pod will have the
## same labels/annotations as the runner metadata
metadata:
  name: example
  annotations:
    iam.amazonaws.com/role: role-arn
### the pod will have the same spec as the runner spec does, except the
### containers field, because of @summerwind's comment: "Currently the
### controller watches the runner's pod state and restarts the pod after
### the actions job is completed. This depends on the container name. If
### we introduce a pod template, the restart process could become
### difficult because the container name would be a dynamic value."
spec:
  repository: summerwind/actions-runner-controller
  serviceAccountName: example
  image: your/runner:latest
  resources:
    limits:
      cpu: "1"
    requests:
      cpu: "0.5"
  sidecarContainers:
  - name: "fluentd"
    image: ...
  - name: istio-proxy
    image: ...
```
And if you have a valid use case for adding an additional field, please open a separate PR. I need your approval, @summerwind @mumoshu, as the maintainers of this controller.
Hi guys, I would prefer supporting the full pod spec from the beginning.
The main reason is that how users will customize the runner image/container is unknown. People may want to add an IAM role, service account, security context, resource requests/limits, configmap mounts, pod affinity, node selectors, etc. I don't think you should care about any of that.
Instead, you're clear about what is needed for the runner-controller to manage the runner pod: for example, managing the runner lifecycle, injecting env vars for the registration token, etc. So I'd suggest we define the contract, like an annotation, which is a quite standard way in k8s. For the implementation, we need proper merge logic, something like the merge snippet I posted earlier.
@missedone I think we can agree that we're going to (eventually) support the full pod spec as feature requests come in.
The point of not using the pod spec "now" is that it gives users the misconception that the full pod spec is supported. Please also see my comment above for the context.
Once we get to the point where our "replicated" pod template spec in the Runner API becomes feature-complete, I believe it's fine for us to port it to full pod spec support, as you proposed.
Well, here are some concrete cases from my side:
So given the list above, which already covers most of the pod spec, why not just have full pod spec support now?
Also, from the user's view, if we support the above, people will easily assume the rest of the pod spec is supported as well, but they might be frustrated when they figure out that isn't true for the extra fields, and then have to raise a feature request and wait until the feature is available.
@missedone Thanks! I thought we had no conflict at all, but maybe that was my misconception?
Just to be clear - are you asking to add full support for the pod template spec from the beginning?
Yes from me, since I can already see this coming.
> then have to raise a feature request and wait until the feature is available.
Yeah I don't like doing that myself either. I agree with you from "the user's perspective".
But.... who has the bandwidth to complete the implementation in one shot? Are you willing to contribute the full implementation, or keep submitting multiple pull requests until it completes, yourself? Then I think that's fine.
I don't think anyone can "commit" to affording their spare time until it completes. That's why I've suggested the gradual-enhancement plan.
Yes, I'm willing to contribute, but I can't commit all my spare time.
Could you list the potential effort needed to support the full pod spec by embedding `corev1.PodTemplateSpec` in RunnerSpec, just as @alexandrst88 did? I may be overlooking something that made me think it's a simple task.
I think this is a good opportunity for me to learn Kubernetes and GitHub Actions more deeply.
Thanks
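For reference, a minimal sketch of what that embedding could look like in the API types (the field names are illustrative assumptions, not the merged design):

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// RunnerSpec sketch: embed the full pod template next to the
// runner-specific fields.
type RunnerSpec struct {
	Repository string `json:"repository"`

	// Full pod template. Note that controller-gen derives the CRD schema
	// from corev1.PodTemplateSpec, which marks "containers" as required --
	// exactly the validation problem discussed above.
	Template corev1.PodTemplateSpec `json:"template,omitempty"`
}

// Runner is the Schema for the runners API (sketch).
type Runner struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec RunnerSpec `json:"spec,omitempty"`
}
```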
Yeap, as I said before, we can't just use PodTemplateSpec, because we'd need to delete the containers field from the CRD:

> Kubernetes API validation forces you to also specify the container fields :)
> lol, it seems like here, elastic/cloud-on-k8s#1822, they removed that validation from the CRD.

So I have to go through the most popular fields and reimplement them :)
I can start from https://github.com/summerwind/actions-runner-controller/issues/5#issuecomment-600371555, which @missedone proposed, and add some more. Will try to finish by EOW, thanks.
So the full list of fields is here: https://godoc.org/k8s.io/api/core/v1#PodSpec, as metadata will be inherited from the Runner's.
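A sketch of the replicated-fields alternative, cherry-picking commonly used `PodSpec` fields into RunnerSpec (this field set is illustrative; the actual set is whatever PR #7 ends up with):

```go
import (
	corev1 "k8s.io/api/core/v1"
)

// RunnerSpec with a hand-picked subset of corev1.PodSpec fields, so the
// generated CRD schema stays under our control (no required "containers").
type RunnerSpec struct {
	Repository string `json:"repository"`
	Image      string `json:"image,omitempty"`

	ServiceAccountName string                      `json:"serviceAccountName,omitempty"`
	Resources          corev1.ResourceRequirements `json:"resources,omitempty"`
	NodeSelector       map[string]string           `json:"nodeSelector,omitempty"`
	Affinity           *corev1.Affinity            `json:"affinity,omitempty"`
	Tolerations        []corev1.Toleration         `json:"tolerations,omitempty"`
	Volumes            []corev1.Volume             `json:"volumes,omitempty"`
	VolumeMounts       []corev1.VolumeMount        `json:"volumeMounts,omitempty"`
	Env                []corev1.EnvVar             `json:"env,omitempty"`
	SecurityContext    *corev1.PodSecurityContext  `json:"securityContext,omitempty"`
}
```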
@missedone I've added the most-used PodSpec fields in https://github.com/summerwind/actions-runner-controller/pull/7/commits/0cfbffaac5b4cb86e5624bb24e4e3b38c8fa137c for now. So I just need to handle the merge logic.
I believe this is now mostly done via #7.
Please open separate issues if there are any missing fields or any kind of improvements, so that we can better track each of them.
Thanks again to everyone involved, especially @alexandrst88 for the pull request!
Thanks, you guys.
Hi! Thanks for this cool tool. I have a few questions: do you have a plan to add pod annotations and a serviceAccount to runners?
As use cases, I'd like to build docker images and push them to ECR, or have the ability to access the k8s cluster API.
If not, I could work on this, thanks.