chrissound opened this issue 7 years ago
I'm also interested to know whether this is possible. Or could we take the NodePort that the Ingress maps to and create a secondary Service of type LoadBalancer on top of it?
TL;DR We use DNS failover. If the GCP L7 load balancers fail, our DNS fails over to the Service IP. Google load balancers have issues more often than you might think, and we've found this is the best way to squeeze another nine or so out of our uptime.
I've just confirmed that a secondary Service of type LoadBalancer works fine. Here's how I'll use it in my configuration:
apiVersion: v1
kind: Service
metadata:
  name: echoserver-failover
  namespace: echoserver
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - port: 443
    targetPort: 8081
    protocol: TCP
    name: https
  type: LoadBalancer
  selector:
    app: echoserver
Then in my deployment I'll run two containers: one serving plain HTTP on 8080 and one handling TLS on 8081 (see the sketch below).
We run node.js web servers, so it's quite easy for us to mount the secret created by kube-lego into a directory on the https container and use the same certs to handle termination at the Service level. I think this should be equally easy to do in an Apache or nginx setup.
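For concreteness, here is a minimal sketch of what that two-container Deployment could look like. The image names, secret name, and mount path are assumptions for illustration, not taken from the original setup; only the ports and the app label come from the Service manifest above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  namespace: echoserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      # Plain HTTP server, matched by the Service's port 80 -> targetPort 8080 mapping
      - name: http
        image: example/echoserver-http:latest   # hypothetical image
        ports:
        - containerPort: 8080
      # TLS-terminating server, matched by the Service's port 443 -> targetPort 8081 mapping
      - name: https
        image: example/echoserver-https:latest  # hypothetical image
        ports:
        - containerPort: 8081
        volumeMounts:
        - name: tls-certs
          mountPath: /etc/tls                   # hypothetical path the server reads certs from
          readOnly: true
      volumes:
      - name: tls-certs
        secret:
          secretName: echoserver-tls            # hypothetical name of the kube-lego-managed secret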
The kube-lego GCE example (https://github.com/jetstack/kube-lego/tree/master/examples/gce) says the Service needs to be of type NodePort.
Is that really the case, though?
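For comparison, a NodePort variant of the same Service (the shape the linked GCE example describes) would look roughly like this; the name and ports mirror the failover Service above and are otherwise assumptions.

apiVersion: v1
kind: Service
metadata:
  name: echoserver
  namespace: echoserver
spec:
  type: NodePort
  ports:
  # nodePort is left unset so Kubernetes allocates one automatically
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoserver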