Open hongkongkiwi opened 4 years ago
The service resource maps closely to the AWS NLB resource, so we don't intend to support a single LB across multiple service resources. See if specifying multiple service ports helps your use case. If you are not able to specify both TCP and UDP ports on a service resource of type LoadBalancer, you can try using a service of type NodePort. The only limitation currently is that the TCP and UDP ports cannot be the same value.
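For illustration, here is a minimal sketch (hypothetical names and ports, not from the comment above) of what "specifying multiple service ports" on one Service looks like; per the stated limitation, the TCP and UDP port values differ, and if the cluster rejects mixed protocols on type: LoadBalancer, the same ports can be declared on a type: NodePort Service instead:
```
# Minimal sketch (hypothetical names/ports): one Service carrying both a TCP
# and a UDP port. The two ports use different values, per the limitation above.
# If the cluster rejects mixed protocols on type: LoadBalancer, switching the
# type to NodePort is the fallback described in this comment.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: app-tcp
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: app-udp
      protocol: UDP
      port: 9090
      targetPort: 9090
```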
Thanks @kishorj, that's a pretty clear explanation. I will try the NodePort approach.
I was wondering why I can't specify the target group with an NLB like I can with the ALB? That would also be a way to resolve this: just specify the same target group for multiple services (as I can do with an Ingress).
I also would love to see something akin to alb.ingress.kubernetes.io/group.name on NLBs (presumably as something akin to service.beta.kubernetes.io/aws-load-balancer-group-name).
NodePort seems like a nonstarter with an autoscaling cluster - you have to go manually create additional target groups and pin your autoscaling group to each of them every time you add a new service (though honestly this is more of a failing of the way NLB target groups work - ideally you should only need one target group, not one target group per port all with identical instance lists)
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
As far as I'm aware this shouldn't be stale. I'd still love to see this, at least.
@philomory This controller currently supports NodePort services as well. If you use the NLB-IP mode annotation service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip" on a Service, even if it's a NodePort service, the controller will manage the load balancer and target groups for you automatically.
Also, there has been a proposal upstream to allow dual TCP/UDP.
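To illustrate the NLB-IP mode mentioned above (my own sketch with hypothetical name and ports, not from kishorj's comment), the nlb-ip annotation on a NodePort Service looks like this, and the controller then provisions the NLB and target groups itself:
```
# Sketch only (hypothetical Service name/ports): with the nlb-ip annotation the
# AWS Load Balancer Controller manages the NLB and target groups even though
# the Service itself is of type NodePort.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: tcp-port
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: udp-port
      protocol: UDP
      port: 9090
      targetPort: 9090
```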
/kind feature
Mixed-protocol (TCP/UDP) Service is alpha in k8s 1.20. Is this the feature ticket to track for support for the MixedProtocolLBService feature gate?
Or is this for multiple Services contributing to a single NLB, similar to #1707 and #1545?
@TBBle, this is the correct issue for mixed protocol support.
Once the MixedProtocolLBService feature gate is enabled, a service of type LoadBalancer with mixed protocols should work fine without further changes, with the following limitations -
@TBBle What, exactly, is the limitation on the AWS side that causes this? Doesn't the NLB listener protocol TCP_UDP cover the case of the same port over both TCP and UDP?
I assume you meant @kishorj with that question.
Per the docs, TCP_UDP is explicitly for the "same port in TCP and UDP" case.
To support both TCP and UDP on the same port, create a TCP_UDP listener. The target groups for a TCP_UDP listener must use the TCP_UDP protocol.
@TBBle You're absolutely right, I meant @kishorj. My apologies.
@kishorj, what's the cause of the limitation that a Load Balancer service with mixed protocols cannot use the same port for both TCP and UDP? It's definitely supported on the AWS side in the form of the TCP_UDP protocol type. But maybe I'm misunderstanding something here?
@philomory, you are correct, AWS supports the TCP_UDP protocol type. In my prior response, I was referring to how the current controller code, without further changes, handles services with TCP and UDP protocols.
As you mentioned, it is possible to utilize the TCP_UDP protocol type supported by AWS to combine matching TCP and UDP ports from the service spec, as long as the target ports or node ports for the TCP and UDP protocols are the same. This is something we have been considering adding in future releases.
So as of today there is no way to have a listener with TCP_UDP?
I have tried using service.beta.kubernetes.io/aws-load-balancer-backend-protocol: TCP_UDP but it does nothing. I'm using v2.1.3.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I don't think this should be marked as stale; this would still be a valuable feature.
/remove-lifecycle stale
Sounds like this is something that'd be good to support. I imagine adding support for TCP_UDP would probably need changes in a few places, but would a good place to start be model_build_listener.go?
@Yasumoto, I've included the design details below if you are interested.
NLB has support for both TCP and UDP listeners, and a k8s LoadBalancer service with mixed protocols is deployed as an NLB configuration using both types of listeners. However, due to a limitation on the AWS ELBv2 side, the listener ports for TCP and UDP cannot be the same. Use cases where both TCP and UDP listeners are used and where we are currently not able to provide a reasonable solution -
AWS ELBv2 has a TCP_UDP type of listener that listens for both the TCP and UDP protocols on the same port, and this construct is useful for providing a solution for mixed protocols in limited cases. This document describes the proposed solution and its limitations.
In case the service spec has the same TCP and UDP ports specified, convert to a TCP_UDP type listener during model generation if there exist two ports p1 and p2 in service.Spec.Ports such that:
For each such (p1, p2) pair, create a listener of type TCP_UDP instead of separate TCP and UDP listeners.
There are no issues with backwards compatibility. This feature does not require any user action.
Since the target ports for both the TCP and UDP protocols have to be the same, the nodePort for instance targets must be statically allocated.
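To make the proposal concrete (my own sketch, not part of the design notes above), a Service like the following declares matching TCP and UDP entries and would be collapsed into a single TCP_UDP listener and target group during model generation:
```
# Hypothetical example: both entries use port 53 and targetPort 53, one TCP and
# one UDP, so under the proposal they would be paired into one TCP_UDP
# listener/target group instead of two separate (and conflicting) listeners.
apiVersion: v1
kind: Service
metadata:
  name: dns
spec:
  type: LoadBalancer
  selector:
    app: dns
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53
```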
I just submitted a PR that implements this using a similar strategy https://github.com/kubernetes-sigs/aws-load-balancer-controller/pull/2275
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Greetings, I've been tracking this for a while and have become a bit unclear on how my use case connects with what is currently supported, and what will be supported, in Kubernetes and in EKS. Apologies in advance if I'm asking this in the wrong thread.
AWS Use Case
Device connecting to an ingestion microservice running as a multi-replica deployment in EKS (K8s 1.21). The device is provisioned with a single endpoint (only one can be provisioned) to resolve and access the ingestion service pods.
The ingestion service application is configured with two listeners, one for TCP and one for UDP. These can be configured for any port (they do not have to be the same port).
Would like to front the ingestion service in EKS with an NLB for the usual reasons. Would like to define the NLB through Kubernetes (service manifest) as part of our environment automation (e.g. Terraform and Flux, etc.).
My understanding is that I cannot do this right now. I can't define a Kubernetes service with annotations that will create a single NLB in AWS that has one listener for TCP and another listener for UDP (even with the port values being different).
Further, that Mixed-protocol (TCP/UDP) Service, which was alpha in K8s 1.20, is what I would need to accomplish this.
Just wondering if my understanding is sound on this.
I think I will need to wait until EKS supports a version of K8s where the mixed-protocol LB functionality is either GA or beta and enabled by default?
Thanks @TBBle for the confirmation - much appreciated.
Hi, I need some help understanding why QUIC would need TCP and UDP on the same listener. Isn't the whole point of using UDP that it's available on most devices? The question is mainly for knowledge; if someone can point me somewhere, I would be thankful.
Assuming you're asking about https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1608#issuecomment-937346660, the TCP support is for HTTP/1 and HTTP/2. As of July 2022 according to Wikipedia only 75% of web browsers support HTTP/3, so not also supporting HTTP/2 and HTTP/1 would cut off a significant fraction of users.
Also, as noted in the HTTP/3 spec, a client may prefer to attempt HTTP/1 or HTTP/2 first and then be redirected by an ALTSVC frame/Alt-Svc header to the HTTP/3 service. Although in this case the HTTP/3 service could be on any UDP port, having it on the standard HTTPS port means that clients that try HTTP/3 first (also valid by the HTTP/3 spec) will not need to fall back to HTTP/2 and then be redirected to the correct port for HTTP/3, giving the best user experience for both client implementation approaches.
I expect over time that more and more HTTP clients will attempt HTTP/3 first, but since HTTP/3 is only currently supported on approximately 25% of websites (same Wikipedia article) I expect that current implementations will prefer HTTP/2 + Alternative Service to minimise time-to-first-byte in the most-common cases.
We saw the same thing in IPv6 adoption where in the early days, you'd try the IPv4 address first (since most clients didn't have global IPv6 connectivity) but over time that shifted towards "Try both and take whichever connects first" and by 2011 was widely "try IPv6 first".
Aha, so the TCP is basically needed to handle the case where UDP gets blocked by middleboxes, so the client can fall back to HTTP/1 or 2 on the same LB, or if the client wants to use them first and then upgrade to HTTP/3. Thanks @TBBle for the thorough answer.
Hi,
Any news on this?
I have a container sharing a camera stream over RTSP. RTSP needs UDP and TCP on the same port.
We added an NLB but we are not able to set the port protocol to TCP_UDP (the allowed values are TCP, UDP or SCTP).
spec.ports[0].protocol: Unsupported value: "TCP_UDP": supported values: "SCTP", "TCP", "UDP"
AWS allows TCP_UDP (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html).
The code in this PR implements TCP_UDP support but it was last tested for v2.2.X: https://github.com/kubernetes-sigs/aws-load-balancer-controller/pull/2275.
If you want to help me update the code for the latest release and implement the suggestions from the AWS reviewer, help would be very much appreciated.
Is there an EKS version that can do TCP_UDP on a single NLB port somehow/anyway?
Is there a way to enable MixedProtocolLBService?
Sorry if this is not the right place to comment. I have a simple service which has both TCP and UDP ports.
When I create an LB in EKS, I get this error for the service:
service-controller Error syncing load balancer: failed to ensure load balancer: mixed protocol is not supported for LoadBalancer
Can someone please guide me on whether this is supported or not in EKS (1.24)? The same service works fine when applied on an on-prem 1.24 cluster.
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  selector:
    matchLabels:
      app: echoserver
  replicas: 1
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
        - image: vhiribarren/echo-server
          name: echoserver
          ports:
            - containerPort: 5001
              protocol: TCP
            - containerPort: 4001
              protocol: UDP
```
and
```
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: echoserver
spec:
  ports:
    - port: 5001
      targetPort: 5001
      protocol: TCP
      name: tcp-port
    - port: 4001
      targetPort: 4001
      protocol: UDP
      name: udp-port
  type: LoadBalancer
  selector:
    app: echoserver
```
So does this issue solve the problem, or have I applied this wrong?
That makes two of us, @vinayus. As per the docs, k8s 1.24 supports it, Azure AKS supports it, and this approach works fine... except in AWS 😞
At the moment we do this by (1) creating LB & Target Group (not managed by aws-load-balancer-controller), (2) creating a Service (ClusterIP), and (3) creating a TargetGroupBinding referring to the Service from (2) and the Target Group from (1). (You can share one NLB across many Services this way, too, reducing costs).
This works well and because of the TargetGroupBinding, aws-load-balancer-controller manages the endpoints for the target group, but it's obviously more pieces to manage than if aws-load-balancer-controller could do the whole thing itself based on annotations. If you are using something like Terraform to manage both AWS resources and k8s resources, it's quite tidy and manageable, but if you are managing it manually it's probably error-prone.
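For reference, here is a sketch of the TargetGroupBinding from step (3); the name, port, and ARN are placeholders for the resources created in steps (1) and (2):
```
# Sketch of step (3): bind an externally-created target group to the ClusterIP
# Service from step (2). The name, port, and ARN below are placeholders.
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-app-tcp
spec:
  serviceRef:
    name: my-app      # the ClusterIP Service created in step (2)
    port: 8080
  targetGroupARN: arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/my-app-tcp/0123456789abcdef
  targetType: ip
```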
This ticket was created in 2020 and support has still not been added... Is there anyone who has expertise in this area and can help with PR #2275?
For people still having this issue: I think the issue does not happen on Kubernetes version 1.25.
I was having the same problem, but on 1.25, with the following manifest I am able to use it properly.
```
kind: Service
metadata:
  name: app
  labels:
    *
    *
    *
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
spec:
  ports:
    - name: api
      port: 8686
      protocol: TCP
      targetPort: 8686
    - name: syslog
      port: 514
      protocol: UDP
      targetPort: 514
    - name: someother
      port: 900
      protocol: TCP
      targetPort: 900
  selector:
    *
    *
    *
  type: LoadBalancer
```
@csk06 The config that you showed doesn't share an NLB across multiple Services, which is what this issue is about.
To update the issue with the latest versions:
EKS 1.26, AWS Load Balancer Controller 2.5.0
Kubernetes manifests:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bind-deployment
  labels:
    app: bind
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind
  template:
    metadata:
      labels:
        app: bind
    spec:
      containers:
        - name: bind
          image: cytopia/bind
          env:
            - name: DOCKER_LOGS
              value: "1"
            - name: ALLOW_QUERY
              value: "any"
          ports:
            - containerPort: 53
              protocol: TCP
            - containerPort: 53
              protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
  name: bind
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "53"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  selector:
    app: bind
  ports:
    - protocol: UDP
      name: dns-udp
      port: 53
      targetPort: 53
    - protocol: TCP
      name: dns-tcp
      port: 53
      targetPort: 53
  type: LoadBalancer
```
The above fails, creating a UDP-only target group in the NLB. The controller/service on the AWS side thinks it's all working fine...
kubectl get service bind
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
bind LoadBalancer 10.100.142.242 k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com 53:31953/UDP,53:31953/TCP 17s
Controller Logs:
{"level":"info","ts":"2023-04-17T11:36:02Z","logger":"controllers.service","msg":"deleted targetGroupBinding","targetGroupBinding":{"namespace":"default","name":"k8s-default-bind-c1c7d775f1"}}
{"level":"info","ts":"2023-04-17T11:36:02Z","logger":"controllers.service","msg":"deleting targetGroup","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-c1c7d775f1/e482770d4dc448e9"}
{"level":"info","ts":"2023-04-17T11:36:02Z","logger":"controllers.service","msg":"deleted targetGroup","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-c1c7d775f1/e482770d4dc448e9"}
{"level":"info","ts":"2023-04-17T11:36:02Z","logger":"controllers.service","msg":"successfully deployed model","service":{"namespace":"default","name":"bind"}}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"successfully built model","model":"{\"id\":\"default/bind\",\"resources\":{\"AWS::ElasticLoadBalancingV2::Listener\":{\"53\":{\"spec\":{\"loadBalancerARN\":{\"$ref\":\"#/resources/AWS::ElasticLoadBalancingV2::LoadBalancer/LoadBalancer/status/loadBalancerARN\"},\"port\":53,\"protocol\":\"UDP\",\"defaultActions\":[{\"type\":\"forward\",\"forwardConfig\":{\"targetGroups\":[{\"targetGroupARN\":{\"$ref\":\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/default/bind:53/status/targetGroupARN\"}}]}}]}}},\"AWS::ElasticLoadBalancingV2::LoadBalancer\":{\"LoadBalancer\":{\"spec\":{\"name\":\"k8s-default-bind-3442e35570\",\"type\":\"network\",\"scheme\":\"internet-facing\",\"ipAddressType\":\"ipv4\",\"subnetMapping\":[{\"subnetID\":\"subnet-00422897a83381cf9\"},{\"subnetID\":\"subnet-014a4b24b2d5ecb6e\"},{\"subnetID\":\"subnet-05df8b99d50eb1b56\"}]}}},\"AWS::ElasticLoadBalancingV2::TargetGroup\":{\"default/bind:53\":{\"spec\":{\"name\":\"k8s-default-bind-5c2f99f91c\",\"targetType\":\"ip\",\"port\":53,\"protocol\":\"UDP\",\"ipAddressType\":\"ipv4\",\"healthCheckConfig\":{\"port\":53,\"protocol\":\"TCP\",\"intervalSeconds\":10,\"timeoutSeconds\":10,\"healthyThresholdCount\":3,\"unhealthyThresholdCount\":3},\"targetGroupAttributes\":[{\"key\":\"proxy_protocol_v2.enabled\",\"value\":\"false\"}]}}},\"K8S::ElasticLoadBalancingV2::TargetGroupBinding\":{\"default/bind:53\":{\"spec\":{\"template\":{\"metadata\":{\"name\":\"k8s-default-bind-5c2f99f91c\",\"namespace\":\"default\",\"creationTimestamp\":null},\"spec\":{\"targetGroupARN\":{\"$ref\":\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/default/bind:53/status/targetGroupARN\"},\"targetType\":\"ip\",\"serviceRef\":{\"name\":\"bind\",\"port\":53},\"networking\":{\"ingress\":[{\"from\":[{\"ipBlock\":{\"cidr\":\"0.0.0.0/0\"}}],\"ports\":[{\"protocol\":\"UDP\",\"port\":53}]},{\"from\":[{\"ipBlock\":{\"cidr\":\"192.168.32.0/19\"}},{\"ipBlock\":{\"cidr\":\"192.168.0.0/19\"}},{\"ipBlock\":{\"cidr\":\"192.168.64.0/19\"}}],\"ports\":[{\"protocol\":\"TCP\",\"port\":53}]}]},\"ipAddressType\":\"ipv4\"}}}}}}}"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"creating targetGroup","stackID":"default/bind","resourceID":"default/bind:53"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"created targetGroup","stackID":"default/bind","resourceID":"default/bind:53","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-5c2f99f91c/8ecf762ec4c4eac4"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"creating loadBalancer","stackID":"default/bind","resourceID":"LoadBalancer"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"created loadBalancer","stackID":"default/bind","resourceID":"LoadBalancer","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:loadbalancer/net/k8s-default-bind-3442e35570/f7a7240948fc0d92"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"creating listener","stackID":"default/bind","resourceID":"53"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"created listener","stackID":"default/bind","resourceID":"53","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:listener/net/k8s-default-bind-3442e35570/f7a7240948fc0d92/5a44371fac255ea2"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"creating targetGroupBinding","stackID":"default/bind","resourceID":"default/bind:53"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"created targetGroupBinding","stackID":"default/bind","resourceID":"default/bind:53","targetGroupBinding":{"namespace":"default","name":"k8s-default-bind-5c2f99f91c"}}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"successfully deployed model","service":{"namespace":"default","name":"bind"}}
{"level":"info","ts":"2023-04-17T11:36:06Z","msg":"authorizing securityGroup ingress","securityGroupID":"sg-0e8b8ba05b8c46832","permission":[{"FromPort":53,"IpProtocol":"tcp","IpRanges":[{"CidrIp":"192.168.0.0/19","Description":"elbv2.k8s.aws/targetGroupBinding=shared"}],"Ipv6Ranges":null,"PrefixListIds":null,"ToPort":53,"UserIdGroupPairs":null},{"FromPort":53,"IpProtocol":"tcp","IpRanges":[{"CidrIp":"192.168.32.0/19","Description":"elbv2.k8s.aws/targetGroupBinding=shared"}],"Ipv6Ranges":null,"PrefixListIds":null,"ToPort":53,"UserIdGroupPairs":null},{"FromPort":53,"IpProtocol":"tcp","IpRanges":[{"CidrIp":"192.168.64.0/19","Description":"elbv2.k8s.aws/targetGroupBinding=shared"}],"Ipv6Ranges":null,"PrefixListIds":null,"ToPort":53,"UserIdGroupPairs":null},{"FromPort":53,"IpProtocol":"udp","IpRanges":[{"CidrIp":"0.0.0.0/0","Description":"elbv2.k8s.aws/targetGroupBinding=shared"}],"Ipv6Ranges":null,"PrefixListIds":null,"ToPort":53,"UserIdGroupPairs":null}]}
{"level":"info","ts":"2023-04-17T11:36:06Z","msg":"authorized securityGroup ingress","securityGroupID":"sg-0e8b8ba05b8c46832"}
{"level":"info","ts":"2023-04-17T11:36:06Z","msg":"registering targets","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-5c2f99f91c/8ecf762ec4c4eac4","targets":[{"AvailabilityZone":null,"Id":"192.168.149.126","Port":53}]}
{"level":"info","ts":"2023-04-17T11:36:06Z","msg":"registered targets","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-5c2f99f91c/8ecf762ec4c4eac4"}
I know this issue is about sharing between services, but it's also the point of reference for multiple protocols on the same load balancer, as evidenced by all the Google hits that get you here talking about this controller and TCP_UDP mode.
The good news:
EKS/AWS doesn't reject the service yaml any more like it used to, by virtue of https://kubernetes.io/docs/concepts/services-networking/service/#load-balancers-with-mixed-protocol-types being available now.
$ dig @k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com example.com
; <<>> DiG 9.18.1-1ubuntu1.1-Ubuntu <<>> @k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20842
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 7f17659536ea77a601000000643d30ab5837db7e312a6de7 (good)
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 85348 IN A 93.184.216.34
;; Query time: 20 msec
;; SERVER: 63.35.59.113#53(k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com) (UDP)
;; WHEN: Mon Apr 17 12:42:38 BST 2023
;; MSG SIZE rcvd: 84
The bad news:
The controller provisions only the first entry in the array (in my case, the UDP service), quietly ignoring the TCP service on the same port.
Yeah, looking back, only the initial request was about sharing a single NLB across multiple services, and the response was "We don't plan to do that", and then we ended up talking about TCP_UDP support instead, apart from a brief return in December 2022-January 2023 to the "multiple services sharing an NLB" question, including a solution using a self-managed NLB (which is similar to what's in the docs AFAICT).
For multiple services sharing an NLB, the existing TargetGroupBinding feature plus an implementation of NLB and TargetGroup-creation primitives would allow that. I was going to suggest the latter would make sense in the ACK project, but they referred NLB support back to here.
For TCP_UDP support, the only PR I'm aware of was #2275, and the PR creator hasn't commented there in over a year, so I assume it's off their radar now.
SEO might be promoting this issue over #2275 for the TCP_UDP conversation, and people might click "Dual TCP/UDP NLB" before they read "...shared across multiple services".
Aside from the above mentioned QUIC and DNS scenarios there is also SIP (5060). It is helpful if SIP/UDP can switch to SIP/TCP when a message exceeds MTU, at the same load balancer IP address.
If there is no intention to support the original request directly (sharing a single NLB across multiple services), should this issue be closed with TargetGroupBinding as the official answer? The conversation about TCP_UDP can be directed to the other issue.
Gratitude to everyone for the invaluable discussions that immensely aided our previous project development. I've documented our successful integration of AWS NLB with Kubernetes, exposing the same 5060 port for both TCP and UDP, along with 5061 for TLS in a single load balancer instance with Kubernetes service. For more insights, check out our blog here: https://dongzhao2023.wordpress.com/2023/11/11/demystifying-kubernetes-and-aws-nlb-integration-a-comprehensive-guide-to-exposing-tcp_udp-ports-for-sip-recording-siprec/
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
So is this three-and-a-half-year-old issue going to be fixed, or is it currently not worth anyone's time? @TBBle
https://github.com/kubernetes-sigs/aws-load-balancer-controller/pull/2275#issuecomment-2017397659 suggests the last person to be putting time into trying to implement TCP_UDP support for the AWS Load Balancer Controller (for same-port TCP and UDP services) hasn't had any time to put into this recently, no.
https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1608#issuecomment-1806901267 describes a successful work-around where you actually manage the load balancer in Terraform or similar and then use the AWS Load Balancer Controller's TargetGroupBinding to bind that NLB to your Kubernetes Service.
So basically, no change in status compared to https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1608#issuecomment-1511348437.
If you're actually looking for a solution to the use-case described in the head of the ticket (i.e. sharing a single NLB across multiple services with different TCP/UDP ports), that comment links to a documented solution, also using TargetGroupBinding to connect Services to an externally-managed NLB.
We use the nginx ingress controller (or something similar like Kong) for this, which can be used to have a single NLB for the whole of EKS.
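For example (a sketch, not from the comment above; the namespace, name, and selectors assume a standard ingress-nginx install), only the ingress controller's own Service is exposed through a single controller-managed NLB, and per-app routing then happens via Ingress objects:
```
# Sketch (assumes a standard ingress-nginx install): one NLB fronts the ingress
# controller's Service; individual apps are routed via Ingress resources
# instead of per-Service load balancers.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
```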
Hi there,
I would like to share a single NLB across two services with both UDP and TCP ports open.
For example: serviceA - Port 550 UDP; serviceB - Port 8899 TCP.
I couldn't seem to find a way to do this unless using an application load balancer and ingress routes.
Is there a way to do this in the v2.0.0 release?
The major blocker was that the targetgroup annotation was only supported at an ingress level (not a service level), so there just seems to be no way to share an LB.