Closed: Devin-Holland closed this issue 10 months ago.
I'd like to help implement this if maintainers think it is worth it!
There are several pieces to this issue:
For this specific issue, secret generation plugins seem like a better solution.
@pwittrock: Gotcha. The first and second pieces would be useful for us at the moment. The secret setting not so much since we can just continue using the k8s secrets. Are there issues open for those pieces?
@pwittrock Using the secret generation plugin doesn't quite work here. That would require us to put the secret value into a file in the repository for kustomize to generate the secret from, unless I'm misunderstanding what you mean.
@pwittrock @TheOriginalAlex I think we could start to do something along the lines of the following PR.
The values.yaml (which could be built on the fly if needed):
apiVersion: v1
kind: Values
metadata:
  name: file1
spec:
  port: 3306
  strategy: Recreate
The service.yaml (notice how the kind has been changed to MyService for this demo):
apiVersion: v1
kind: MyService
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: $(Values.file1.spec.port)
  selector:
    app: mysql
The kustomization.yaml file (no vars section):
---
kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
resources:
- service.yaml
- deployment.yaml
- values.yaml
kustomize build .
The result:
apiVersion: apps/v1beta2
kind: MyDeployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        ports:
        - containerPort: 3306
          name: mysql
---
apiVersion: v1
kind: MyService
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
The examples are checked in here.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Any progress on that?
/reopen We really need this :)
@dbazhal: You can't reopen an issue/PR unless you authored it or you are a collaborator.
We are looking at different ways of populating configuration values at runtime. If you could set the values as a separate command before kustomize build was run, would this solve your issue?
/reopen
@pwittrock: Reopened this issue.
We are looking at different ways of populating configuration values at runtime. If you could set the values as a separate command before kustomize build was run, would this solve your issue?
Yeah, any "runtime" way would be great! Separate command, or if it is possible, flag for build command. Any way to add parameters as
-p key=value --parameter complex_var='{"it": ["could", "be"], "even": ["json"]}'
would really help. Without this functionality we're forced to apply some templating system to our kustomization.yaml, what makes kustomize itself less useful(two utilities making the similar functions).
@dbazhal Good to know. We are exploring a general-purpose solution for programmatic modification of configuration. This is intended as the next generation of kubectl set, but it may be able to address your case as well. It is still early days, but you are welcome to check it out and give feedback if you think it is a good candidate for your use case in addition to the aforementioned kubectl set.
You can check out the code here: https://github.com/kubernetes-sigs/kustomize/tree/master/kyaml/setters
Rather than templating, it supports publishing setter commands through a custom per-resource OpenAPI schema. Currently this supports setting scalar values and list values, but not complex values.
We'd need to think about whether this is the right solution for setting complex values programmatically.
The feature is being actively developed, but the docs aren't quite up to date.
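To make that concrete, here is a rough sketch of the setter workflow as exposed by kpt's cfg commands, which embed this library. The field, values, and the setter name "replicas" are purely illustrative, and the exact comment format may have changed since this was written:

# A setter marks an existing field with an OpenAPI reference in a comment,
# instead of replacing the value with a template placeholder, e.g. in deployment.yaml:
#
#   spec:
#     replicas: 3 # {"$ref": "#/definitions/io.k8s.cli.setters.replicas"}
#
# The setter is created once, then driven from the command line:
kpt cfg create-setter . replicas 3
kpt cfg set . replicas 5

The resource stays valid YAML the whole time, which is the main difference from text templating.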
@pwittrock If I understood you correctly, what you're talking about is the ability to program precise customization for non-standard fields of objects (not the commonly used .metadata.name or .metadata.namespace). So in this case, if I want my CRD to be customizable, I have to specify exactly which fields should be modified, something like spec.customfield=desired-value.
What I imagine is really more of a templating system: not for modifying fields of the structures that represent the processed objects, but for modifying their definition text and working with it as text.
In my opinion, users get much more freedom when they are able to customize whatever they want right inside the object. For example, this is how my deployment for versioned blue-green rollouts would look:
kustomization.yaml:
...
parameters:
- name: TAG
  type: string
  default: latest
nameSuffix: "-$(_parameters.TAG)"
...
deployment.yaml
kind: Deployment
metadata:
  name: myservice
  ...
spec:
  selector:
    matchLabels:
      app: myservice
      tag: "$(_parameters.TAG)"
  template:
    metadata:
      labels:
        app: myservice
        tag: "$(_parameters.TAG)"
    spec:
      containers:
      - name: hello-kubernetes
        image: "myregistry.com/myproject/myservice:$(_parameters.TAG)"
service.yaml
...
kind: Service
metadata:
  name: myservice
  ...
spec:
  selector:
    app: myservice
    tag: "$(_parameters.TAG)"
...
And what I would have to do in my CD pipeline is almost as simple as:
kustomize build -p tag=v202003101144
kubectl apply
That is an easy way to have an all-mighty customization system for my application that is not limited by the types of objects I deploy. Adding a new customization point is as simple as adding a new parameter and putting it wherever I want. The fact that I'm required to declare the variable as a "parameter" creates a semblance of a fixed "contract" that says what can be customized and how, which I don't have if I go straight to templating the definitions themselves.
Thought about it again, and I understand that both the ability to parametrize structures and the ability to parametrize text are required :) In my everyday work we already use both, but we implemented such customization with Python + Jinja + OpenAPI schemas and it is not very flexible. We have variables declared with an OpenAPI schema. The same schema is used to validate the variables. These variables are then used in two ways: to parametrize the definition text (just rendering Jinja templates), and to customize the structures deserialized from that text. In the second case it's literally Python code that says something like
if 'key' in options:
    final_object.spec.param = options['key']
In this situation I have two problems: 1) OpenAPI schemas can't easily validate complex structures with optional fields; 2) for every new object I am forced to write new Python code.
I know helm can do both things pretty well, but using helm just for templating seems like overkill to me :)
@dbazhal What we have proposed looks very close to your first example, but requires the parameter to be defined some place.
Right now it is only implemented as a library to be included in other tools. kpt is one such tool that embeds the functionality and has some documentation on how to use it: https://googlecontainertools.github.io/kpt/cfg/create-setter.html
If and where we would embed the library as part of the project tooling is TBD.
Ok, I got it. Thank you for your reply :)
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Would really appreciate this feature, is it not going to happen?
@travisf will the approach suggested above not work? If not, I'd love to know more about why, to better understand your needs.
The approach suggested above would work in my case, but it would be easier if we could use it directly in the CLI.
We want the CI pipeline to docker build and kubectl apply every commit. This means we need to apply labels, tags and namespaces based on the current commit/ref.
COMMIT_SHA=`git rev-parse --short HEAD`
BRANCH_NAME=`git rev-parse --abbrev-ref HEAD`
kubectl apply -k . -vars commit_sha=$COMMIT_SHA branch_name=$BRANCH_NAME
seems quite convenient (and a bit simpler than using kpt).
Another idea, which I borrow directly from GitHub Actions, would be the ability to execute some expressions directly in kustomization.yaml files. Something like:
env:
  commit_sha: ${{ git rev-parse --short HEAD }}
  branch_name: ${{ git rev-parse --abbrev-ref HEAD }}
resources:
- deployment.yaml
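For reference, part of this workflow can already be scripted today with kustomize edit, which rewrites the overlay's kustomization.yaml in place before the build. A rough sketch under assumed names (the image myapp, the registry, and the overlays/ci directory are all illustrative):

COMMIT_SHA=`git rev-parse --short HEAD`
BRANCH_NAME=`git rev-parse --abbrev-ref HEAD`
cd overlays/ci                                   # assumed overlay directory
# retag the image and stamp a commit label into the generated manifests
kustomize edit set image myapp=myregistry/myapp:"$COMMIT_SHA"
kustomize edit add label commit_sha:"$COMMIT_SHA"
# assumes the branch name is a valid namespace name (no slashes etc.)
kustomize edit set namespace "ci-$BRANCH_NAME"
kustomize build . | kubectl apply -f -

This mutates the checked-out kustomization.yaml, which is usually acceptable inside a CI job, but it is clearly clumsier than a first-class flag on the build command.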
I'm not sure that the proposed solution will work that well for my use case.
When I bring up my environment, not every resource in my deployment is a Kubernetes resource. For example, bringing up an environment requires first deploying a Kubernetes cluster, which obviously is not a Kubernetes resource and so can't be deployed using kustomize; I use terraform to bring up my Kubernetes cluster on GKE. But that's not the only non-Kubernetes resource: another is GCP static IP addresses, which I also allocate using terraform (though none of this is specific to terraform, it could just as easily be done with the gcloud command without changing how I use kustomize). Those IP addresses then need to be set in the loadBalancerIP fields of my Kubernetes Service resources. I don't know what those IP addresses are until they are created, so they are dynamic.
What I don't want to do is commit those IP addresses into my git repo, because the act of tearing down and rebuilding my environment (something done quite frequently for dev/stage/test environments) will cause the IP addresses to change, and I don't want to have to update my configuration files with new IP addresses every time I decide to rebuild those environments.
From the sound of the proposed solution, to make this work I would first have to deploy the services with no IP address configured (which means the platform will dynamically select an IP address for me). Then I would have to run an additional command to update the IP address to my configured static IP address, e.g. via a shell script. This works when you deploy initially, but it doesn't work when you redeploy (e.g. as part of a normal rollout of changes), because when you redeploy you temporarily reset the service back to having no IP address until the shell script runs and sets the IP address back correctly. During that time your service has the wrong IP address, so it causes an outage. And it's far worse for some resources: for example, with the GKE global load balancer ingress controller, where reconciliation is slow and it can take 15 minutes for changes to apply, I've found that making a change and then immediately changing it back can leave the temporary change in effect for 15 minutes before the change back is picked up and applied.
What was the outcome here? I'm looking for functionality similar to the original post from @Devin-Holland.
@webster-chainalysis: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
Was there a way developed to set variables at kustomize build time?
Is there any update related to this issue? Isn't that a requirement for some cases? Passing some parameters to build would be a nice feature.
We are still planning to bring setters from kpt to kustomize.
/reopen
I've used this in GitHub Actions:
uses: microsoft/variable-substitution@v1
with:
  files: "k8s/base/deployment.yml, k8s/base/config.yml"
env:
  spec.template.spec.containers.0.image: ${{ steps.poop.outputs.IMAGE_VARIABLE }}
  data.clientSecret: ${{ secrets.CLIENT_SECRET }}
  data.keyCloakClientSecret: ${{ secrets.ANOTHER_SECRET }}
  data.sentryDsn: ${{ secrets.YET_ANOTHER_SECRET }}
It would be pretty cool if I could do something like --build-arg <jsonpath>=<value>.
Then I don't have to worry about configuring a tool (kpt) within the tool (kustomize). "Create-setter? Use-setter? View-setter? Delete-setter? Setter this, setter that, no thank you! Jsonpath bro!"
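A jsonpath-style edit can at least be approximated today by having the pipeline write a JSON 6902 patch that kustomize already understands. A rough sketch, with the Deployment name myapp and the file names invented for illustration:

# kustomization.yaml references a patch file via the standard patchesJson6902 field:
#
#   patchesJson6902:
#   - target:
#       group: apps
#       version: v1
#       kind: Deployment
#       name: myapp
#     path: image-patch.json
#
# The pipeline generates the patch just before building:
cat > image-patch.json <<EOF
[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "myregistry/myapp:$COMMIT_SHA"}]
EOF
kustomize build . | kubectl apply -f -

It still means generating a file rather than passing a flag, but it keeps the path-based edit inside kustomize's existing model.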
@captainrdubb: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen Trying to use kustomize as it's needed for a tool we're using, but can't integrate it into our CI pipeline or our Infrastructure as Code projects. It seems that setting variables from the command line would enable these use cases.
@JethroMV: You can't reopen an issue/PR unless you authored it or you are a collaborator.
So how does one pass runtime parameters to the kustomize build command? Was a feature like this ever added?
My use case is that deployments are based on a git branch name, so I need to dynamically set the labels, annotations and even the name of some resources. Currently I am achieving this using yq, but it would be great to do it all through kustomize...
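For what it's worth, the name, label, and annotation parts of that can be scripted with kustomize edit before the build; yq then only has to cover fields kustomize has no edit command for. A rough sketch, with the overlay path assumed:

BRANCH_NAME=`git rev-parse --abbrev-ref HEAD`
cd overlays/branch                                  # assumed overlay directory
kustomize edit set namesuffix -- "-$BRANCH_NAME"    # e.g. myservice-feature-x
kustomize edit add label branch:"$BRANCH_NAME"
kustomize edit add annotation deployed-branch:"$BRANCH_NAME"
kustomize build . | kubectl apply -f -

The "--" lets the suffix start with a dash; branch names containing characters that are invalid in labels or resource names would still need to be sanitized first.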
As this is closed is there a replacement issue that could be linked?
I think the kustomize maintainers have told us the correct solution here is to use helm
The reason why I'm using Kustomize instead of Helm is that I'd like to avoid template hell. I came here looking for some way to merge a configmap (i.e. set a variable from the CLI). I'm not interested in fully featured templating.
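Merging generated values into a ConfigMap is something kustomize's configMapGenerator can already do without templating, if the pipeline writes the values to an env file first. A rough sketch, with all names and paths invented for illustration:

# overlay kustomization.yaml
resources:
- ../base
configMapGenerator:
- name: app-config        # must match a configMapGenerator entry in the base
  behavior: merge
  envs:
  - ci-values.env         # written by the pipeline, never committed to git

The pipeline writes ci-values.env (e.g. `echo "COMMIT_SHA=$COMMIT_SHA" > ci-values.env`) immediately before running kustomize build, so the values stay out of the repository even though they aren't passed on the command line.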
/reopen
Still interested in this feature or something similar.
@Devin-Holland: Reopened this issue.
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
This is one of the reasons why we use Helm instead of Kustomize.
/kind feature
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
I am using Azure DevOps and I want to be able to use a pipeline variable to set a password value so that the password is not in plain text in a secret file. Is there a way to set a Kustomize variable from the command line when running kustomize build or is the only way currently to put the password value in the source code?
For example something like:
kustomize build -var my-password=1234
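One pattern that works with kustomize as it stands (no CLI variable support needed) is to have the pipeline write the secret value to a local env file that is never committed, and let a secretGenerator pick it up. A rough sketch, with the generator name, file name, and variable names invented for illustration:

# kustomization.yaml
secretGenerator:
- name: db-credentials
  envs:
  - secret.env            # created by the pipeline and ignored by git

An Azure DevOps step would then write the file, e.g. `echo "MY_PASSWORD=$(MY_PASSWORD)" > secret.env`, just before running kustomize build, so the plain-text password never appears in the source code.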