Opened by lswith 6 years ago (status: Open)
Recently we added JSON patch support, which is a good solution for this problem. Take a look at our example https://github.com/kubernetes-sigs/kustomize/blob/master/examples/jsonpatch.md This feature is currently only available from HEAD; we will release a new version soon.
ah right. Just create a JSON Patch and then use that to edit the build.
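Something like this, presumably (a minimal sketch; the names and the API group/version are illustrative and will depend on your cluster):

kustomization.yaml:

patchesJson6902:
- target:
    group: networking.k8s.io
    version: v1
    kind: Ingress
    name: my-ingress
  path: ingress-host-patch.yaml

ingress-host-patch.yaml:

- op: replace
  path: /spec/rules/0/host
  value: my-app.example.com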
I'm sorry @Liujingfang1, I read the example, and it does not seem like a suitable solution to what is, as @lswith mentioned, a common use case. I was thinking of incorporating Kustomize into our workflow as a low-overhead alternative to creating a helm chart, but a chart seems to be a much more elegant alternative at this point. Any opportunity for native ingress variables in Kustomize?
I agree: being able to patch the Ingress host value is super useful, and it would be preferable to be able to do it with a strategic merge. I am seeing a lot of feature requests closed with "use a JSON patch", without much consideration of the use cases.
Same for me... 👍 Could we reopen this one?
Also commenting in hopes that this gets looked at as something that should be supported natively. Pushing JSON patches as the solution doesn't seem viable for all use cases. For obscure things that aren't done often, sure. But configuring an ingress is quite common, so having a cleaner way to kustomize that would be extremely beneficial.
@davinkevin's referenced commit (https://github.com/davinkevin/Podcast-Server/commit/9ca4be54606925e3a8a4213384bc8de84ce7fbbf) illustrates the problem very nicely — how do I make three different variants with three different ingress rules applying to three different hosts? Here's how I'm currently solving the problem — can y'all see how this is inelegant?
Here's my base:
[deployment and service omitted]
broadcaster/broadcaster.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: broadcaster
spec:
  rules:
  - host: $(SERVICE_NAME).example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: broadcaster
          servicePort: 80

broadcaster/kustomization.yaml:

resources:
- broadcaster.yaml
And here's what I'm using to produce ingresses for broadcaster-bulbasaur.example.com, broadcaster-charmander.example.com, and broadcaster-squirtle.example.com.
The top-level kustomization.yaml:

resources:
- ./bulbasaur
- ./charmander
- ./squirtle
squirtle/kustomization.yaml:

resources:
- ../../broadcaster
nameSuffix: -squirtle
patchesJson6902:
- path: hostname.yaml
  target:
    group: extensions
    kind: Ingress
    name: broadcaster
    version: v1beta1

squirtle/hostname.yaml:

- op: replace
  path: /spec/rules/0/host
  value: broadcaster-squirtle.example.com
charmander/kustomization.yaml:

resources:
- ../../broadcaster
nameSuffix: -charmander
patchesJson6902:
- path: hostname.yaml
  target:
    group: extensions
    kind: Ingress
    name: broadcaster
    version: v1beta1

charmander/hostname.yaml:

- op: replace
  path: /spec/rules/0/host
  value: broadcaster-charmander.example.com
bulbasaur/kustomization.yaml:

resources:
- ../../broadcaster
nameSuffix: -bulbasaur
patchesJson6902:
- path: hostname.yaml
  target:
    group: extensions
    kind: Ingress
    name: broadcaster
    version: v1beta1

bulbasaur/hostname.yaml:

- op: replace
  path: /spec/rules/0/host
  value: broadcaster-bulbasaur.example.com
Compare that to what I'd like to be able to write instead, deriving the host from the (suffixed) service name:

broadcaster/kustomization.yaml:

resources:
- broadcaster.yaml
vars:
- name: SERVICE_NAME
  objref:
    kind: Service
    name: broadcaster
    apiVersion: v1

squirtle/kustomization.yaml:

resources:
- ../../broadcaster
nameSuffix: -squirtle

charmander/kustomization.yaml:

resources:
- ../../broadcaster
nameSuffix: -charmander

bulbasaur/kustomization.yaml:

resources:
- ../../broadcaster
nameSuffix: -bulbasaur
Much cleaner.
Any chance to see that addressed in a future version?
When you have a bunch of subdomains in an ingress, using a JSON patch is not acceptable. It works, but referencing the hosts by index leads to odd errors if someone changes the order of the hosts in the original ingress. So having something like what we already have for images would be so nice...
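For comparison, this is roughly the kind of built-in, index-free syntax the images transformer already offers (the values below are illustrative); an equivalent for ingress hosts is what's being asked for here:

images:
- name: nginx
  newName: registry.example.com/nginx
  newTag: "1.25"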
In my team we ended up piping kustomize build into sed to handle this more conveniently. But it's sad that something so simple is not supported out of the box for such a common use case.
I ran into a different use-case for the same feature yesterday:
At work we are going to have a local k8s setup on each machine. With the old VM setup we customize the hostname with a $USER suffix. That hostname is then broadcast via mDNS on the internal network so people other than the person at the machine can test solutions from their own machine (i.e. instead of solution.local you'd have solution-$USER.local).
I am working on an mDNS ingress hostname broadcaster (currently only works with microk8s), and being able to locally apply various manifests containing ingresses without having to post-process them would be very helpful.
I empathize with the Kustomize team; maybe this could be addressed in k/k with something like a spec.baseHostname field?
Please add support for suffixes. For instance, I have a lot of Ingresses with the hostname suffix .aws-test.example.com, and I need to add overlays for different environments or zones to get Ingresses with the hostname suffix .aws.example.com or .gcp.example.com.
Without features like this (being able to essentially do string interpolation on fields), I'm not really sure how Kustomize fits into the ecosystem. I was using it because Helm is way too complex for simple projects, and Kustomize covers 99% of my needs, except that I can't configure the hostnames of ingress routes.
I know there is a design goal not to make this a templating project, but without some kind of basic templating/interpolation, does this not greatly limit the potential use cases?
JSON Patch - clever, but grody for simple use cases.
OK, I rescind my +1 on this. Using patches, my specific use case is actually very easy to solve by templating the patch files and creating a templated kustomize overlay (using ansible):
ingress-patch.json.j2
[
  {
    "op": "replace",
    "path": "/spec/tls/0/hosts/0",
    "value": "{{ ingress.name }}-{{ ansible_env.USER }}.local"
  },
  {
    "op": "replace",
    "path": "/spec/rules/0/host",
    "value": "{{ ingress.name }}-{{ ansible_env.USER }}.local"
  }
]
kustomize.yaml.j2
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
{%- for ingress in operations.ingresses %}
- path: {{ ingress.name }}-ingress-patch.json
  target:
    group: networking.k8s.io
    version: v1beta1
    kind: Ingress
    name: {{ ingress.name }}
{%- endfor %}
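For completeness, a sketch of how these templates might be rendered with Ansible; overlay_dir and the surrounding play are assumptions, not part of the original setup:

- name: Render ingress patches
  ansible.builtin.template:
    src: ingress-patch.json.j2
    dest: "{{ overlay_dir }}/{{ ingress.name }}-ingress-patch.json"
  loop: "{{ operations.ingresses }}"
  loop_control:
    loop_var: ingress

- name: Render the kustomization
  ansible.builtin.template:
    src: kustomize.yaml.j2
    dest: "{{ overlay_dir }}/kustomization.yaml"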
I'm also beginning to think that implementing something like this in kustomize would erode some of its simplicity that I have grown really fond of.
Ugh, I'm on my first day of Kustomize and foiled by this fundamental challenge. I have different domains for each environment. This seems like a basic use case.
Options:
1) Copy/paste the entire file and move on.
2) patchjson using paths like path: /spec/rules/0/host, which seems very fragile
3) kustomize vars but they seem to only reference other strings. Is it possible to use a constant? If not, why not?
If there is a "bug" here, is it that the elements of "rules" don't have names, so they can't be strategically merged, breaking a basic use case for reusing code with different domains?
Is there some other solution I'm missing?
# SEE: https://kubernetes.io/docs/concepts/services-networking/ingress/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-xxx.com
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: clusterissuer-selfsigned
spec:
  tls:
  - hosts:
    - xxx.xxx.team
    - app.xxx.team
    - www.xxx.team
    - xxx.team
    secretName: tls-xxx
  defaultBackend:
    service:
      name: www
      port:
        number: 80
  rules:
  - host: xxx.xxx.team
    http:
      paths:
      - backend:
          service:
            name: echo1
            port:
              number: 80
  - host: app.xxx.team
    http:
      paths:
      - backend:
          service:
            name: echo2
            port:
              number: 80
@MichaelJCole the patches are just as stable as the Ingress API itself, so that shouldn't be any trouble.
I actually ended up creating a kustomize transformer instead (transformers and generators are awesome btw.).
That way an additional patch overlay is not needed.
I'm sure it can be modified to fit your usecase.
#!/usr/bin/env python3
"""IngressTransformer - Modify ingress domain names according to a template

Usage:
    IngressTransformer <config-path>

Template pattern:
    The template supports the following variables:
    {_TLD}       last part of the domain name
    {_HOSTNAME}  everything except the TLD
    {_FQDN}      the entire domain name
    {...}        any environment variable
"""
import docopt
import yaml
import os
import sys


def main():
    params = docopt.docopt(__doc__)
    config = yaml.load(open(params['<config-path>']), Loader=yaml.FullLoader)
    template = config['spec']['template']
    resources = yaml.load_all(sys.stdin, Loader=yaml.FullLoader)
    ingresses = []
    for resource in resources:
        if resource['apiVersion'] in ['networking.k8s.io/v1', 'networking.k8s.io/v1beta1'] \
                and resource['kind'] == 'Ingress':
            for entry in resource['spec']['tls']:
                for idx, domain in enumerate(entry['hosts']):
                    entry['hosts'][idx] = transform_host(domain, template)
            for rule in resource['spec']['rules']:
                rule['host'] = transform_host(domain, template)
        # pass every resource through; only Ingress objects are modified
        ingresses.append(resource)
    sys.stdout.write(yaml.dump_all(ingresses))


def transform_host(domain, template):
    parts = domain.split('.')
    return template.format(**{
        **os.environ,
        '_TLD': parts[-1],
        '_HOSTNAME': '.'.join(parts[0:-1]),
        '_FQDN': '.'.join(parts),
    })


if __name__ == '__main__':
    main()
Place it in ~/.config/kustomize/plugin/APINAME/ingressdomaintransformer and put a config next to your ingress yaml:
---
apiVersion: APINAME
kind: IngressDomainTransformer
metadata:
  name: username-suffix
spec:
  template: '{_HOSTNAME}-{USER}.{_TLD}'
As @andsens mentioned, the most flexible way to do any operation in kustomize is writing your own transformer. Meanwhile, kustomize now supports KRM functions as transformers. A KRM function is containerized, so it is easier to reuse. Although we are lacking documentation for this feature, you can find some examples in the test code.
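For orientation, a hedged sketch of what a containerized KRM function transformer declaration can look like; the API group, kind, image, and spec below are made up for illustration, only the config.kubernetes.io/function annotation is kustomize's convention:

# ingress-host-fn.yaml (illustrative)
apiVersion: example.com/v1
kind: IngressHostTransformer
metadata:
  name: ingress-host
  annotations:
    config.kubernetes.io/function: |
      container:
        image: example.com/ingress-host-transformer:v0.1
spec:
  host: my-app.example.com

The file is then referenced from kustomization.yaml under transformers: like any other transformer config.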
FYI @andsens, the above example of a python3 transformer is badly broken. I tried to adapt it to my needs. First, the docopt is not valid (it always errors out with the latest docopt 0.6.2); in the end I had to remove the "Template pattern:" block completely. Then it doesn't check whether the 'tls' or 'rules' keys really exist, and it errors out if one of them is missing. And in the case of 'rules' you're passing the undefined value of domain, which is only populated in the for loop for 'tls' but not for 'rules', so I had to exchange 'domain' with rule['host'].
And lastly, you forgot the part where you have to add:

transformers:
- <your config file>.yaml

to the kustomization.yaml to make it work.
And last but not least, you might want to add this before calling main(), or as the first thing in main(). I was running this in a CentOS 7 docker image and had PyYAML issues with invalid characters because the input stream was not utf-8. Note that this requires Python 3.7 or higher:
sys.stdin.reconfigure(encoding='utf-8')
sys.stdout.reconfigure(encoding='utf-8')
sys.stderr.reconfigure(encoding='utf-8')
@marcelser thank you for your notes. Indeed the for loop is badly broken. The docopt works for me, but I can see that I have an additional newline before the "Template patterns:", and testing it on try.docopt.org indeed confirms that the newline is needed.
Thanks for the tip about utf-8, that'll definitely come in handy.
I'd like to be able to dynamically add/remove/edit hosts during deployments. I need to deploy one namespace and ingress per branch, all other things being equal. IMO adding ingresses dynamically should be as easy as adding labels, e.g.:
kustomize edit set/add/remove ingress test.xmpl.com
My use case is simple so I can reliably use sed and/or envsubst, but I find it strange that the ability to modify an ingress is missing.
If we are pushed to use workarounds and templating, either via sed or external transformers, it's most likely something that should be added natively to kustomize.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Using another (overly complicated, turing-complete, conference-inspiring) templating layer on top of kustomize kind of defeats the whole idea of having kustomize in the first place.
A solution like the one for images or replicas would be the best option, in my opinion.
I also need to inject a different host when running kustomize, based on the name of the namespace. That means the namespaceXXX should be set as part of the ingress host.
Reviewing this long thread, I must say I'm not sure what the solution is.
Any light, please?
First of all, thanks to @andsens for preparing the examples with the Python IngressDomainTransformer. Unfortunately for me, I had trouble running it in the simple way; below you will find my version of the transformer.
#!/usr/bin/env python3
import yaml
import os
import sys


def main(args=sys.argv[1:]):
    # args[0] is the transformer config path, args[1] is the host from argsOneLiner
    host_name = args[1]
    resources = yaml.load_all(sys.stdin, Loader=yaml.FullLoader)
    ingresses = []
    for resource in resources:
        if resource['apiVersion'] in ['extensions/v1beta1', 'networking.k8s.io/v1', 'networking.k8s.io/v1beta1'] and resource['kind'] == 'Ingress':
            # create the annotations map if the Ingress has none yet
            annotations = resource['metadata'].setdefault('annotations', {})
            annotations['nginx.org/websocket-services'] = host_name
            if 'tls' in resource['spec']:
                for entry in resource['spec']['tls']:
                    if 'hosts' in entry:
                        for idx, domain in enumerate(entry['hosts']):
                            entry['hosts'][idx] = host_name
            if 'rules' in resource['spec']:
                for rule in resource['spec']['rules']:
                    rule['host'] = host_name
        # pass every resource through; only Ingress objects are modified
        ingresses.append(resource)
    sys.stdout.write(yaml.dump_all(ingresses))


if __name__ == '__main__':
    sys.stdin.reconfigure(encoding='utf-8')
    sys.stdout.reconfigure(encoding='utf-8')
    sys.stderr.reconfigure(encoding='utf-8')
    main()
Place it in ~/.config/kustomize/plugin/APINAME/ingressdomaintransformer.
Next, change your kustomization.yaml to:
resources:
- deployment.yaml
transformers:
- |-
  apiVersion: APINAME
  kind: IngressDomainTransformer
  metadata:
    name: username-suffix
  argsOneLiner: myapplication.example.org
Or, if you would like a separate file for the transformer configuration, create a file ingressDomainTransformer.yaml:
apiVersion: APINAME
kind: IngressDomainTransformer
metadata:
  name: username-suffix
argsOneLiner: myapplication.example.org
Then change your kustomization.yaml to:
resources:
- deployment.yaml
transformers:
- ingressDomainTransformer.yaml
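One caveat: depending on your kustomize version, exec plugins like this may also need to be explicitly enabled at build time (e.g. kustomize build --enable-alpha-plugins <dir>); check the plugin documentation for your release.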
/remove-lifecycle stale
/remove-lifecycle stale
Just came across this post and can't believe that this has been open since 2018. I wanted to drop Helm and go for kustomize, but this actually is a show stopper. The idea @iameli provided is actually what I'm looking for. Any chance that this gets some traction again?
Hi, for this kind of stuff I actually use labels and vars to handle this (can't use replacements for the moment: https://github.com/kubernetes-sigs/kustomize/issues/4099#issuecomment-970279208):
broadcaster/broadcaster.yaml
[deployment and service omitted]
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: broadcaster
spec:
  rules:
  - host: broadcaster-$(POKEMON).example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: broadcaster
          servicePort: 80
broadcaster/kustomization.yaml
resources:
- broadcaster.yaml
vars:
- name: POKEMON
  objref:
    kind: Deployment
    name: broadcaster
    apiVersion: apps/v1
  fieldref:
    fieldpath: metadata.labels.pokemon
And I use the kustomize CLI to set dynamic labels, like kustomize edit set label "pokemon:charmander".
Hope it helps somebody :)
Hi. I created an example repository showing the currently available options for managing Ingress with Kustomize. https://github.com/jlandowner/kustomize-ingress
Hope you find it helpful.
Another tool that can be leveraged here is components, in conjunction with replacements and a labelSelector. It does not address the concern @xhanin brought up about hostname indices being unreliable, but it does address the use case that @iameli brought up of:
how do I make three different variants with three different ingress rules applying to three different hosts?
Here are the steps:

1. Define your base Ingress resources, labeling them so they can be selected later (e.g. app=my-app):
# base/app-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress1
  labels:
    app: "my-app"
  annotations:
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "myapp.local"
    secretName: my-tls-certificate
  rules:
  - host: "myapp.local"
    http:
      paths:
      - path: /app1
        pathType: Exact
        backend:
          service:
            name: my-app1-backend
            port:
              number: 5000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress2
  labels:
    app: "my-app"
  annotations:
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "myapp.local"
    secretName: my-tls-certificate
  rules:
  - host: "myapp.local"
    http:
      paths:
      - path: /app2
        pathType: Exact
        backend:
          service:
            name: my-app2-backend
            port:
              number: 5000
2. Define a Kustomize component that can be used to DRY-up the replacements (a partial-replacement variant is sketched after the sample output below):
# components/ingress-dns/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
replacements:
- source:
    kind: ConfigMap
    name: environment
    fieldPath: data.hostname
  targets:
  - select:
      kind: Ingress
      labelSelector: "app=my-app"
    fieldPaths:
    - spec.tls.0.hosts.0
    - spec.rules.0.host
3. Define your overlays, using a config map called environment to supply the hostname for the environment of each respective overlay:
# overlays/01-dev/config-environment.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: environment
data:
  hostname: dev.myapp.example.com

# overlays/01-dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
- config-environment.yaml
components:
- ../../components/ingress-dns

# overlays/01-live/config-environment.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: environment
data:
  hostname: myapp.example.com

# overlays/01-live/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
- config-environment.yaml
components:
- ../../components/ingress-dns
Sample output for live:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress1
  labels:
    app: "my-app"
  annotations:
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "myapp.example.com"
    secretName: my-tls-certificate
  rules:
  - host: "myapp.example.com"
    http:
      paths:
      - path: /app1
        pathType: Exact
        backend:
          service:
            name: my-app1-backend
            port:
              number: 5000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress2
  labels:
    app: "my-app"
  annotations:
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "myapp.example.com"
    secretName: my-tls-certificate
  rules:
  - host: "myapp.example.com"
    http:
      paths:
      - path: /app2
        pathType: Exact
        backend:
          service:
            name: my-app2-backend
            port:
              number: 5000
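If only part of the host differs per environment (the suffix scenario raised earlier in this thread), replacements can also rewrite a single delimited segment; a sketch, assuming the ConfigMap carries just the environment-specific subdomain in a data.subdomain key:

# components/ingress-dns/kustomization.yaml (variant, illustrative)
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
replacements:
- source:
    kind: ConfigMap
    name: environment
    fieldPath: data.subdomain
  targets:
  - select:
      kind: Ingress
      labelSelector: "app=my-app"
    fieldPaths:
    - spec.rules.0.host
    options:
      delimiter: "."
      index: 0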
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
👍
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/remove-lifecycle rotten
/reopen
@lieryan: You can't reopen an issue/PR unless you authored it or you are a collaborator.
This issue should be re-opened; it is still very much relevant, as there is still no good solution for it.
/reopen
@cprivitere: Reopened this issue.
@lswith: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label. The triage/accepted label can be added by org members by writing /triage accepted in a comment.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
I think a reasonably common use case is to swap an ingress's host value:
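For illustration, the field in question is the host under an Ingress rule, e.g.:

spec:
  rules:
  - host: my-app.example.com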
Can we get a feature to set this?