Open · polarstack opened this issue 1 year ago
Hello @polarstack,

That's an interesting idea. IMO it's even a must-have. Looks like we should parse the tls section of the Ingress resource, but we need to dig deeper.
Hello @polarstack,

Quick update: we now support the tls section of Ingress in 1.5.5. Don't hesitate to test it yourself.

Keeping this issue open because we still need to document (and test) the cert-manager integration.
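For reference, here is a minimal Ingress using that tls section (a sketch for testing; host, secret and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  namespace: hello-world
spec:
  tls:
  - hosts:
    - hello-world.example.com
    secretName: hello-world-tls   # BunkerWeb 1.5.5+ loads the cert/key from this Secret
  rules:
  - host: hello-world.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world-service
            port:
              number: 8888
```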
Hi,

The secret loading is a start, but it's not enough for the HTTP-01 (web) challenge. When you deploy a new Ingress using Let's Encrypt (annotation `cert-manager.io/cluster-issuer: letsencrypt`) and the secret does not exist yet, cert-manager spins up a new solver pod and a new Ingress listening only on port 80 for the Let's Encrypt challenge:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0
  generateName: cm-acme-http-solver-
  labels:
    acme.cert-manager.io/http-domain: "123456789"
    acme.cert-manager.io/http-token: "987654321"
    acme.cert-manager.io/http01-solver: "true"
  name: cm-acme-http-solver-b2n7j
  namespace: hello-world
  ownerReferences:
  - apiVersion: acme.cert-manager.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Challenge
    name: lets-hello-world-bw-1-1580467882-4097216534
    uid: 389d03bc-9634-4641-8f6c-20c2bb07c647
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world.example.com
    http:
      paths:
      - backend:
          service:
            name: cm-acme-http-solver-lglzx
            port:
              number: 8089
        path: /.well-known/acme-challenge/qRh5NAZCNVFnq0qkNQx3vGDHhPbBUN0n5Cjm8Hlcc_A
        pathType: ImplementationSpecific
```
So we have two Ingresses for the same domain: one with `path: /` and one with the `.well-known` path seen above. In the NGINX Ingress Controller, both Ingresses are merged into a single server directive:
```nginx
[...]
server {
    server_name hello-world.example.com;
    listen 80 proxy_protocol;
    [...]
    location /.well-known/acme-challenge/qRhcNAZCNVFnq9qFNQx3vGDHhCbBUN0S5CjZ8Hlcc_A/ {
        [...]
        set $proxy_upstream_name "hello-world-cm-acme-http-solver-lglzx-8089";
        [...]
    }
    [...]
    location / {
        [...]
        set $proxy_upstream_name "hello-world-hello-world-service-8888";
        [...]
    }
}
```
From what I could find in your generated config files, I only get the `/` location:
```nginx
location / {
    etag off;
    set $backend1069 "http://hello-world-service.hello-world.svc.cluster.local:8888";
    [...]
}
```
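In other words, what seems to be missing is a second location for the solver Ingress, something like this (a sketch following the generated pattern above; the `$backend` variable name is hypothetical):

```nginx
# hypothetical: additional location for the ACME HTTP-01 solver Ingress
location /.well-known/acme-challenge/qRh5NAZCNVFnq0qkNQx3vGDHhPbBUN0n5Cjm8Hlcc_A {
    etag off;
    # route the challenge to the cert-manager solver service on port 8089
    set $backend1070 "http://cm-acme-http-solver-lglzx.hello-world.svc.cluster.local:8089";
    [...]
}
```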
**What's needed and why?**

Hi all,

I see the certbot-dns-* examples, which could cover at least the second use case (e.g. `../examples/certbot-dns-cloudflare/docker-compose.yml`), but as far as I understand, they require you to mount the "certs" volume into BunkerWeb, the scheduler, and a custom certbot container with the corresponding config. I'm not sure how I would implement that on Kubernetes, though. Using Kubernetes Secrets and Ingress annotations would make it more native to that integration.
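The Docker setup I'm referring to looks roughly like this (a sketch from memory, not the actual example file; image tags, paths and the /certs mount point are assumptions):

```yaml
# docker-compose.yml sketch: all three containers share one "certs" volume
services:
  bunkerweb:
    image: bunkerity/bunkerweb:1.5.5
    volumes:
      - certs:/certs:ro            # assumption: BunkerWeb reads the cert/key from here
  scheduler:
    image: bunkerity/bunkerweb-scheduler:1.5.5
    volumes:
      - certs:/certs
  certbot:
    image: certbot/dns-cloudflare
    # credentials file mount omitted for brevity
    command: certonly --non-interactive --agree-tos -m admin@example.com
      --dns-cloudflare --dns-cloudflare-credentials /etc/cloudflare.ini
      -d hello-world.example.com
    volumes:
      - certs:/etc/letsencrypt     # certbot writes the issued files here

volumes:
  certs:
```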
**Implementation ideas (optional)**

The documentation for cert-manager is here: https://cert-manager.io/docs/. The installation and configuration of cert-manager itself can be considered out of scope.
cert-manager stores the key and certificate in a Kubernetes Secret of type `kubernetes.io/tls`:
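Something like this (a sketch; names and data are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hello-world-tls            # matches spec.tls[].secretName on the Ingress
  namespace: hello-world
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi...       # base64-encoded certificate (truncated)
  tls.key: LS0tLS1CRUdJTi...       # base64-encoded private key (truncated)
```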
On the Ingress, the mapping happens in the annotations section:
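i.e. the relevant parts of the Ingress would be (a sketch; the issuer name is a placeholder):

```yaml
metadata:
  annotations:
    # tells cert-manager which (Cluster)Issuer should issue the certificate
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - hello-world.example.com
    secretName: hello-world-tls   # cert-manager creates and renews this Secret
```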
I'm not a specialist as I'm still learning, but I guess the Ingress annotation triggers cert-manager, which then stores the crt/key as a Secret. Finally, the Ingress controller (e.g. Traefik) picks it up and deploys/configures the TLS termination. Maybe it's also triggered by the Ingress controller itself. See for example the official Kubernetes NGINX Ingress chart: https://github.com/kubernetes/ingress-nginx/blob/afd1311f8529c21fdf6621bf683bec814e698f1d/charts/ingress-nginx/templates/admission-webhooks/cert-manager.yaml
As one can have multiple issuers, I would suggest leaving that as a matter for cert-manager and defining only the secret:
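In other words, BunkerWeb wouldn't need to know anything about issuers; it would only resolve the Secret referenced on the Ingress (sketch):

```yaml
spec:
  tls:
  - hosts:
    - hello-world.example.com
    # which issuer produced this Secret is cert-manager's business;
    # BunkerWeb only needs to load it
    secretName: hello-world-tls
```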
Finally, the BunkerWeb scheduler(?) would pick up the secret and store it in /certs/, like it already does for http, server-http, modsec, etc. with the ConfigMap feature.
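For comparison, the existing ConfigMap feature works roughly like this (a sketch from memory of the docs; exact annotation names and values may differ between versions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cfg-bunkerweb-modsec
  namespace: hello-world
  annotations:
    bunkerweb.io/CONFIG_TYPE: "modsec"   # also e.g. http, server-http, ...
data:
  custom.conf: |
    # custom rule picked up by the scheduler and written to the matching folder
    SecRuleRemoveById 930120
```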
Hopefully I was able to explain the need simply; otherwise, please let me know if I should elaborate. If you think this is an edge case that doesn't fit your roadmap, don't worry about it and close the issue :-)