Open maxstepanov opened 3 months ago
Hi @maxstepanov, just to make sure, what do you mean by unset the `spec.replicas` field?
Also, what you want is to generate/deploy a `HorizontalPodAutoscaler` resource, right? If that's the case, why not have your `hpa` type/provisioner generate this manifest, which the Dev could ask for in their Score file?
Hi @mathieu-benoit, when using HPA it is recommended to remove the `spec.replicas` field from the Deployment. This avoids conflicts between manual scaling and HPA-controlled scaling: when kubectl applies the Deployment, the replica count is reset to the `spec.replicas` value, and some pods might start shutting down before the HPA can react and adjust the ReplicaSet back to the desired value.
The default value for `spec.replicas` is 1, so I'm not sure why it's set explicitly by score-k8s.
That's what brought me to suggest JSON patches in provisioners. The Deployment can be patched by the provisioner itself, so I don't need to patch the value from the command line and leak this aspect outside. I found this approach very useful in my implementations.
Like so:
```yaml
- uri: template://default/hpa-deployment
  type: hpa-deployment
  manifests: |
    - apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: {{ .SourceWorkload }}
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: {{ .SourceWorkload }}
        minReplicas: {{ .Params.minReplicas }}
        maxReplicas: {{ .Params.maxReplicas }}
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 80
  patches:
    - target:
        kind: Deployment
        name: "{{ .SourceWorkload }}"
      patch: |-
        - op: remove
          path: /spec/replicas
```
PS: Not sure if the workload `Kind` value is available to provisioners, but it would be nice to have.
Makes sense, thanks for the info.
> That's what brought me to suggest JSON patches in provisioners.
I totally agree with that, it would be very beneficial to be able to inject or patch the `Deployment`'s fields. Example: "I want to inject `securityContext`", etc. Note: I have filed the same ask/approach in score-compose: https://github.com/score-spec/score-compose/issues/142. CC: @astromechza
> The default value for `spec.replicas` is 1, so I'm not sure why it's set explicitly by score-k8s.
That's a good point: https://github.com/score-spec/score-k8s/blob/main/internal/convert/workloads.go#L212. @astromechza, should we not set/generate this field/value by default?
Fix PR for replicas=1 is up!
I totally understand the need for modifying the converted output of score-k8s, and we have a few places where I've seen yq or kustomize used to modify the manifests prior to deployment.
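For illustration, such a post-processing step might look like this (a rough, untested sketch assuming yq v4 and a generated file named `manifests.yaml`):

```sh
# Strip spec.replicas from any Deployment in the generated multi-doc
# manifest so that an HPA can own the replica count.
yq -i 'del(select(.kind == "Deployment").spec.replicas)' manifests.yaml
```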
I'm not really clear on why a "resource" is the answer here though. It might need to be some other syntax and semantics that allows users to modify the workload output - similar to how we might allow Score workloads to be converted to Jobs/CronJobs or custom CRDs.
@astromechza There are many cases that require Workload patching alongside the added manifests. If the patches are defined somewhere else, the implementation of the resource is incomplete in the template provisioner.
I have a case where I need to create a Secret and mount it as a file. So I created a template provisioner that adds the Secret to the output manifests. Next, I need to patch the volumes into the workload. I don't see any way to accomplish that in the template provisioner definition. After all, I'd like to have the code in one place.
I can run yq after score-k8s generates, detect if a resource of this type is used, read it and patch it with kustomize, but this makes the implementation leak outside and spreads it over several components.
At this point it's easier not to implement the template resource at all and patch it with a kustomize Component later.
That's what we have been doing so far.
If I can specify JSON patches and templates together, then the provisioner implementation is self-contained and easy to understand. That's why kustomize added Components in the end, and we use them heavily.
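For reference, the kind of Component we use looks roughly like this (a sketch; the path and file name are made up):

```yaml
# components/strip-replicas/kustomization.yaml (hypothetical path)
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
  - target:
      kind: Deployment
    patch: |-
      - op: remove
        path: /spec/replicas
```

The main kustomization then lists `manifests.yaml` under `resources:` and pulls the Component in via `components:`.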
This approach would require adding `WorkloadKind` to the provisioner's `Input` so a provisioner can support different kinds and error out if one is not supported.
Here is one example. The same thing goes for serviceAccounts, PDBs, etc...
```yaml
- uri: template://default/mounted-secret
  type: mounted-secret
  manifests: |
    {{ if not (eq .WorkloadKind "Deployment") }}{{ fail "Kind not supported" }}{{ end }}
    - apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .State.service }}
        annotations:
          k8s.score.dev/source-workload: {{ .SourceWorkload }}
          k8s.score.dev/resource-uid: {{ .Uid }}
          k8s.score.dev/resource-guid: {{ .Guid }}
        labels:
          app.kubernetes.io/managed-by: score-k8s
          app.kubernetes.io/name: {{ .State.service }}
          app.kubernetes.io/instance: {{ .State.service }}
      data:
        password: {{ .State.password | b64enc }}
  patches: |
    - target:
        kind: Deployment
        name: "{{ .SourceWorkload }}"
      patch: |-
        - op: add
          path: /spec/template/spec/volumes
          value:
            - name: "{{ .SourceWorkload }}"
              secret:
                secretName: "{{ .SourceWorkload }}"
```
The second route is not to use score-k8s templating facilities at all and implement custom CMD provisioners that leave the patches aside for a later stage of kustomize patching. This approach is cumbersome and devalues template provisioners.
I can work on this if there is interest.
> I have a case where I need to create a Secret and mount it as a file. So I created a template provisioner that adds the Secret to the output manifests. Next, I need to patch the volumes into the workload. I don't see any way to accomplish that in the template provisioner definition. After all, I'd like to have the code in one place.
Ok cool, this is the use case.
We do already have a solution to this particular thing, although it's not particularly well documented: https://github.com/score-spec/score-k8s/blob/26e22116f2a7562d669256bc1d64e9013a6be346/internal/provisioners/default/zz-default.provisioners.yaml#L39C5-L39C5
The template provisioner returns an output key that looks like `secret-thing: {{ encodeSecretRef "my-secret" "my-key" }}`; this is a way of encoding the Secret name and the key within the Secret.
Then, when you mount this as a file in the workload, you use `container.x.files[n].content: ${resources.foo.secret-thing}`; it will identify that this is a secret and will mount it as a volume in the pod.
So your example above can look like this:
Provisioner:
```yaml
- uri: template://default/mounted-secret
  type: mounted-secret
  outputs:
    reference: {{ encodeSecretRef .State.service "password" }}
  manifests: |
    {{ if not (eq .WorkloadKind "Deployment") }}{{ fail "Kind not supported" }}{{ end }}
    - apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .State.service }}
        annotations:
          k8s.score.dev/source-workload: {{ .SourceWorkload }}
          k8s.score.dev/resource-uid: {{ .Uid }}
          k8s.score.dev/resource-guid: {{ .Guid }}
        labels:
          app.kubernetes.io/managed-by: score-k8s
          app.kubernetes.io/name: {{ .State.service }}
          app.kubernetes.io/instance: {{ .State.service }}
      data:
        password: {{ .State.password | b64enc }}
```
And your workload could look like:
```yaml
...
containers:
  example:
    ...
    files:
      - target: /mnt/password
        content: ${resources.secret.reference}
resources:
  secret:
    type: mounted-secret
```
That's what I was afraid of when choosing the Secret as an example. What about serviceAccounts? PDBs? I don't want to expose this to the developer at all. Add-and-patch is the regular case. I wish this tool could meet me in the middle instead of forcing me to implement CMD provisioners with post-processing hooks. Oh well...
Yeah I agree that it doesn't handle other kinds of links between resources and workloads. In general we should look for ways to have the workload consume a resource, rather than the resource submitting arbitrary patches to workloads.
As seen on #29.
@astromechza The same thing we do every day: add objects and patch references. If the reference is in the workload, like service accounts, pod disruption budgets, topology spread, I must be able to patch it. In order to generate the patch, I need to know the object Kind to provide the appropriate patch.
Our developers must be able to specify the service account, topology spread, pod disruption budget, etc. in score-k8s via a resource! score-k8s doesn't support this pattern! Only for secrets and volumes!
I'm trying to avoid forking this, that's why I'm begging to add the value, so I can re-implement templating in a CMD provisioner that also leaves patches out-of-band (score is not aware of them).
Then I can use the wrap-and-dump pattern, where I wrap the score-k8s execution in a script that picks up the patches left by my CMD provisioners and applies them to manifests.yaml.
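Something along these lines (a rough sketch; the patches directory and file layout are hypothetical):

```sh
#!/usr/bin/env sh
# Wrap score-k8s: generate manifests, then apply the JSON patches that
# the CMD provisioners dumped out-of-band, via a kustomize overlay.
set -e
score-k8s generate score.yaml
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - manifests.yaml
components:
  - patches/   # written out-of-band by the CMD provisioners
EOF
kubectl kustomize . > patched-manifests.yaml
```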
This is such a common DevOps pattern, I'm really surprised to get so much pushback on this.
There's a suggestion that we should be able to access workload annotations or context which describes the output "shape" of the workload since without this the patch may be invalid or unnecessary.
I've been looking at other Score implementations and similar issues and conversations and I think I have a suggestion here.
Similar prior art:
- Some score-compose users add a `compose-mixins.yaml` which gets applied after their main compose spec via `docker compose -f compose.yaml -f compose-mixins.yaml`. This mixin gets applied directly on top of the workload.
- Humanitec has a resource type called `workload` which can provide `update` outputs that are applied as JSON patches to the generated Humanitec "Deployment Set". See https://developer.humanitec.com/platform-orchestrator/reference/resource-types/#workload.
So I think we can probably do a similar thing here: currently the provisioners have `manifests` as part of their execution output; these are new manifests to add to the output, but they do not get applied to the workload's manifest. We could add a `patches` output, which gets applied using RFC 6902 to the workload output itself after it has been converted.
After converting the workload, we search all listed resources and apply any patch outputs in lexicographic order based on the resource name.
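A minimal sketch of what that step could look like (not the actual score-k8s code; assuming the evanphx/json-patch library and that the workload document is available in its JSON form):

```go
package convert

import (
	"encoding/json"
	"sort"

	jsonpatch "github.com/evanphx/json-patch/v5"
)

// applyResourcePatches applies each resource's "patches" output to the
// converted workload document, in lexicographic order of resource name.
func applyResourcePatches(workload []byte, patches map[string]json.RawMessage) ([]byte, error) {
	names := make([]string, 0, len(patches))
	for name := range patches {
		names = append(names, name)
	}
	sort.Strings(names) // deterministic: lexicographic by resource name

	for _, name := range names {
		p, err := jsonpatch.DecodePatch(patches[name])
		if err != nil {
			return nil, err
		}
		if workload, err = p.Apply(workload); err != nil {
			return nil, err
		}
	}
	return workload, nil
}
```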
This mechanism could be re-used by most of our other Score implementations in order to apply meaningful mixins.
We currently pass the workload name through to the provisioner, but the provisioner will need to understand more about the structure of the output kubernetes spec in order to generate a meaningful patch. This could easily be done through the params or annotations on the resource for now while we investigate this further.
Example provisioners:
As an example, here's a workload which requests a `k8s-service-account` resource type. The application running inside the workload will expect this to provide the URLs and k8s token needed to access a Kubernetes API, even if the workload is running on Docker Compose via score-compose or some other Score implementation.
score-k8s implementation:
```yaml
- uri: template://example-k8s-service-account
  type: k8s-service-account
  manifests:
    - <service account manifest here if it needs to be generated>
  patches:
    - op: add
      path: /spec/serviceAccountName
      value: my-service-account
```
score-compose implementation:
```yaml
- uri: template://example-k8s-service-account
  type: k8s-service-account
  patches:
    - op: add
      path: /services/my-workload/environment/KUBERNETES_SERVICE_HOST
      value: localhost
    - op: add
      path: /services/my-workload/environment/KUBERNETES_SERVICE_PORT_HTTPS
      value: 6443
    - op: add
      path: /services/my-workload/volumes/...
      value: <a kubernetes token or something here>
```
Or something similar to that. This is also generic enough to work for other workload kinds in the future too.
Thoughts @mathieu-benoit ?
I'll see if I can put up a PR for this soon
Yes, I think `patches` is a really good approach. I can add/inject `securityContext`, etc. or patch any field.
Now, I'm wondering, with your example, in the Score file we will need to add a resource dependency `k8s-service-account`? If so, I think that's not convenient. Because what about patching any other field, for example if I want to inject `securityContext`, how do I inject that? Create a type `workload-security-context`? Not very user friendly... Could we have, like in Humanitec, a `workload` type which is always resolved and that someone can implement/override?
Note: same note/remark for both score-k8s and score-compose.
@mathieu-benoit Or just create a type `workload` and add your custom provisioner for that.
```yaml
resources:
  workload-patch:
    type: workload
```
I don't think we should add automatic workload resources that surprise users.
I'm trying to implement an HPA provisioner and I'm facing the issue with the Deployment's `spec.replicas` set to 1. The documentation mentions this but doesn't explain why. I'd like to unset it, and the only supported option seems to be via patching in CLI arguments.
I've toyed with a similar tool and ended up implementing JSON patches in provisioner definitions. These are returned alongside the manifests and run at the end of the rendering pipeline. This way a provisioner functions similarly to kustomize's Components. It makes provisioners more self-contained and removes the need to monkey-patch with CLI arguments or a kustomize step later. What do you think? Was this discussed already somewhere?