Open jhuntwork opened 2 years ago
Through some initial investigations, I was trying to see if I could remove the ALB above as well, and just use an NLB. The issue is that Skipper only listens on one port, and we need a cheap/easy place to do HTTP redirects, which is what the ALB provides.
Currently it seems that there is no support for creating a target group (TG) that points to another, arbitrary endpoint. I'd have to figure out whether we can find the listening endpoints of an ALB and set up the TG to point at those.
@jhuntwork the recent skipper version allows you to specify a redirect listener. Please also see https://github.com/zalando/skipper/issues/1694, which we aim to fix as soon as possible.
Awesome, thanks!
Our current implementation is based on an internal container that starts two skipper processes: 1) a skipper redirect listener on port 9998 (https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/skipper/deployment.yaml#L56-L59 and https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/skipper/deployment.yaml#L85-L86) and 2) skipper-ingress listening on port 9999, serving everything else.
Basically, we have a Docker container that starts run.sh, which contains this:
if [ -n "$HTTP_REDIRECT" ]; then
    (skipper -address=:9998 \
        -support-listener='' -metrics-listener='' \
        -inline-routes='redirect: * -> redirectTo(308, "https:") -> <shunt>; health: Path("/healthz") -> status(204) -> <shunt>;' \
        -access-log-disabled) &
fi

# exec skipper with all args; skipper will be PID 1, because we replace sh, and skipper handles shutdown
exec "$@"
I hope it makes sense and you can easily adapt it.
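For context, here is a minimal sketch of how such a container could be invoked so that the redirect process and the main skipper-ingress process both run. The image name and the skipper flags below are illustrative assumptions, not the actual Zalando deployment (which configures this in the pod spec linked above):

```shell
# Hypothetical invocation sketch: run.sh is the entrypoint wrapper and the
# remaining arguments are the main skipper-ingress command line, which
# run.sh exec's so that skipper becomes PID 1 and handles shutdown signals.
# HTTP_REDIRECT enables the background redirect listener on port 9998.
docker run \
    -e HTTP_REDIRECT=true \
    -p 9998:9998 -p 9999:9999 \
    my-skipper-image \
    /run.sh skipper -address=:9999 -kubernetes
```

In the real setup the equivalent command and environment would be expressed in the Kubernetes Deployment manifest rather than with `docker run`.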
Yeah looks straightforward, thanks. That run.sh file isn't included in your public container, right?
No, run.sh is custom.
cheap/easy place to do HTTP redirects, which is what the ALB provides
ALB for HTTP redirects is not cheap IMO :)
Please also note that NLB (or rather existing NLB+Skipper setup) will not support HTTP/2 with TLS offloading on NLB. We have a draft https://github.com/zalando/skipper/pull/1868 to support h2c in Skipper that may improve on this.
@AlexanderYastrebov the interesting question is about supporting a 3rd target group with a 3rd listener, in this case for SSH access for git.
What is the preferred mechanism to define additional listeners and target groups?
Two target groups for an NLB is quite a new feature (#435), currently configured with startup flags.
Maybe the best option would be to manage the NLB outside of the controller, e.g. via a separate CloudFormation stack. We might also consider improving the controller stack to export its target groups: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-crossstackref.html
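To illustrate the cross-stack idea, here is a hedged sketch of how an externally managed stack or script could consume a target group ARN exported by the controller's stack. The export name below is made up for illustration; the controller does not currently export anything like this:

```shell
# Hypothetical sketch: look up a target group ARN that the controller's
# CloudFormation stack could export (the export name is invented here).
# In a template-managed consumer stack you would use Fn::ImportValue instead.
TG_ARN=$(aws cloudformation list-exports \
    --query "Exports[?Name=='kube-ingress-nlb-target-group'].Value" \
    --output text)
echo "exported target group: $TG_ARN"
```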
Maybe the best option would be to manage the NLB outside of the controller, e.g. via a separate CloudFormation stack.
We can definitely do that; we already have to do that for some other resources. But it would be nice if we could get the AWS resources we need just by defining a Kubernetes resource. I think the only thing missing at this point is support for a third listener/TG.
I started working a while ago on an implementation for this, but got sidetracked with other things. I need to look at this again, but before I do I just want to ask if there's any recent changes or thoughts about design that would impact this potential feature?
@jhuntwork I think this project is normally low-traffic, and there have been no significant changes.
I have a use case where an NLB needs 3 listeners, ports 80, 443, and 22. The picture below shows what I have set up manually and would like to achieve with kube-ingress-aws-controller.
The ALB in the picture is already currently deployed and managed by the ingress controller. I would like to also automatically provision and manage the NLB similarly, but it needs the third pass-through TCP listener and target group.
I am willing to provide the additional features through a PR, but I have a few questions: