Closed vmasule closed 2 years ago
+1 I tried specifying two distinct services at different ports with the same lb-name (using the annotation `service.beta.kubernetes.io/aws-load-balancer-name`).
The LB is created, but only with the listeners specified in the last service deployed (listeners are overwritten on the LB, not merged).
@indiketa Thanks for trying this. Yes, I think there is not enough documentation on this. It's easy to get this working for an ALB using ingress rules, and there is some clarity on how to do that for ALB, but for NLB it is completely missing.
@here Maintainers might be aware of whether this is possible, but nobody is responding; I guess they must be very busy with other work. Please respond if possible — even a simple "not possible" will help.
@indiketa, @vmasule Prior issues reported on this matter: #1545, #1707. Unlike ALB, NLB doesn't have listener rules to forward traffic to separate target groups for a given listener port. In addition, a k8s service maps closely to an NLB. For example, a multi-port service would map to an NLB with multiple listeners.
There is a possibility to use a single NLB for multiple k8s services, but you'd have to create the NLB and target groups separately. Here are the steps:
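To make the "create the NLB and target groups separately" step concrete, here is a hedged sketch using the AWS CLI. All names, subnet/VPC IDs, and ports are placeholders, not values from this thread; the real setup would repeat the target-group and listener steps once per service port.

```shell
# Create the shared NLB once.
aws elbv2 create-load-balancer \
  --name shared-nlb --type network \
  --subnets subnet-aaaa subnet-bbbb

# Create one target group per service port (target-type must match
# the TargetGroupBinding's targetType, here "instance").
aws elbv2 create-target-group \
  --name dlv1a-tg --protocol TCP --port 5032 \
  --vpc-id vpc-cccc --target-type instance

# Attach a listener on the NLB forwarding that port to the target group.
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> --protocol TCP --port 5032 \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>
```

The target group ARN returned by `create-target-group` is what goes into the `targetGroupARN` field of the TargetGroupBinding.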
Limitations
@kishorj I actually did what you have suggested here, but after applying the TargetGroupBinding I can't see any changes.
```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: <name>
  namespace: <service-namespace>
spec:
  serviceRef:
    name: dlv1a
    port: 5032
  targetGroupARN: <arn>
```
I am using something like this. I apply it with `kubectl` and it gets applied successfully. The description looks something like this, but I don't see anything happening with the target group. Am I missing something here?
```
Namespace:    <name>
Labels:       <none>
Annotations:  <none>
API Version:  elbv2.k8s.aws/v1beta1
Kind:         TargetGroupBinding
Metadata:
  Creation Timestamp:  2021-08-19T17:33:27Z
  Finalizers:
    elbv2.k8s.aws/resources
  Generation:  1
  Managed Fields:
    API Version:  elbv2.k8s.aws/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"elbv2.k8s.aws/resources":
    Manager:      controller
    Operation:    Update
    Time:         2021-08-19T17:33:27Z
    API Version:  elbv2.k8s.aws/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:serviceRef:
          .:
          f:name:
          f:port:
        f:targetGroupARN:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-08-19T17:33:27Z
  Resource Version:  17392
  UID:               c39630e0-f985-4b9e-9447-4c344d0db724
Spec:
  Service Ref:
    Name:            dlv1a
    Port:            5032
  Target Group ARN:  <arn>
  Target Type:       instance
Events:              <none>
```
@vg-vaibhav, what is your service type? Do you see any errors in the controller logs?
@kishorj My service is a Go service running a TCP server. I have hundreds of these TCP servers running on multiple ports, and I don't want to create a unique NLB for each of them, so I currently manage this through multiple target groups behind a couple of NLBs.
But in k8s I can't achieve this, hence I am trying to manage the NLB externally from k8s.
There are no errors in the controller logs. These are the logs of aws-lb-controller:
```
I0819 17:14:03.073781 1 leaderelection.go:242] attempting to acquire leader lease kube-system/aws-load-balancer-controller-leader...
{"level":"info","ts":1629393243.073862,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1629393243.1741655,"logger":"controller-runtime.webhook.webhooks","msg":"starting webhook server"}
{"level":"info","ts":1629393243.1762042,"logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"}
{"level":"info","ts":1629393243.1765256,"logger":"controller-runtime.webhook","msg":"serving webhook server","host":"","port":9443}
{"level":"info","ts":1629393243.1782875,"logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher"}
```
I am not sure if I am missing any step. I did `kubectl apply -f tgb.yaml`, for which I pasted the description in the first comment.
@vg-vaibhav, the logs from the controller pod indicate that it is waiting for the leader election lock, so it is not the active controller. Could you check the logs from the other controller pod? If you installed the v2.2.0 helm chart, there are two replicas for the controller - one active and another standby.
@kishorj Got the issue — I was looking into the wrong pod.
The controller says the service should be `NodePort` or `LoadBalancer` to use `TargetGroupBinding`.
I wasn't aware of this; it wasn't mentioned in the documentation.
Thank you :)
@vg-vaibhav, you'd need a type NodePort service in your case since the target type is instance. If you used IP target type, ClusterIP would have worked as well.
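For reference, a minimal sketch of the matching `NodePort` Service (the name and port are taken from the TargetGroupBinding above; the selector label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dlv1a
  namespace: <service-namespace>
spec:
  type: NodePort     # required when the target group's target type is "instance"
  selector:
    app: dlv1a       # assumed pod label
  ports:
    - port: 5032
      targetPort: 5032
      protocol: TCP
```

With `targetType: ip` in the TargetGroupBinding, `type: ClusterIP` would work here as well.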
I see. I think I missed this detail. Thank you so much @kishorj
Closing this issue, since we don't plan to support "service groups"; a k8s service maps closely to the NLB model.
So the way to have multiple services behind a single NLB is using `TargetGroupBinding`? Can I create the NLB and TargetGroup through some YAML annotation via kubectl, or do I need to create them manually in AWS and then apply the `TargetGroupBinding`?
I am also interested in having multiple services work with a single NLB, and in how to achieve this configuration: just with YAML, or manually in the AWS Console plus YAML config?
@kishorj

> closing this issue since we don't plan to support "service groups" since k8s service maps closely to an NLB model.
This mapping sounds strange. If an NLB can handle all the traffic, don't you think it's a good idea to reuse it for several services?
My situation: we were planning to migrate from AWS ECS to EKS. We are running 20 microservices and using one shared NLB, which handles all the traffic and costs us $20 per month. If we start using EKS, we will pay $400 per month — for each environment. It's a waste of money.
@vg-vaibhav I am interested to know how you manage the TargetGroup creation outside of EKS. Is it a manual or an automated solution?
We too are looking at a similar solution, as the AWS LBC does not allow a shared NLB even though the TargetGroups use different ports. We would have liked something like the ALB's annotation-based grouping, which allows one ALB with many services behind it. The AWS LBC for NLB does not allow this, even though manually (CLI or CFN) I can create an NLB, create multiple target groups, attach them to the single NLB, and all is well.
Would you be willing to share how you manage the creation of the TargetGroups outside of EKS? Is it an automated solution?
@krabradosty We too are in exactly the same situation: the AWS LBC for NLB creates an NLB per microservice, which in our case means a couple hundred NLBs. Once we have migrated out of ECS we can use the internal service discovery of k8s and won't need an NLB anymore, but we would still have a few services (about 10) using an NLB, and that means 10 NLBs.
Surely the AWS Load Balancer team can see the dilemma we are in?
Why allow ALBs to be grouped, with listeners based on different ports, host headers and paths, but disallow NLBs from being grouped with listeners on different ports? What am I not understanding about the statement "since k8s service maps closely to an NLB model"?
Surely some logic could be placed in the load balancer controller to at least watch for an annotation on a manually created TargetGroupBinding and automatically create the TargetGroups. I don't mind writing the Helm templating for the TargetGroupBinding portion, but having to manually create the target group and then copy-paste the targetGroupARN into the TargetGroupBinding is painful, to say the least. Maybe someone needs to write a TargetGroupBinding controller to handle this?
Anyway, it's clear @kishorj has made his point vehemently and that this WILL NOT be a feature, so my question to you @kishorj is: how do we manage the creation of the target groups automatically or programmatically within EKS/k8s? Do we need to write our own solution, or does something already exist? As a heavy AWS user I would expect something to exist from the AWS EKS or AWS LBC teams. Surely we cannot be the only ones with this problem?
Currently, based on your advice @kishorj, these are the steps one could follow:

- create a manifest with `kind: TargetGroupBinding` (after manually creating the target group and copying its ARN, as above)
- `kubectl apply -f targetgroupbinding-$SVC_NAME`, or have Helm template the TargetGroupBinding

Rinse and repeat for hundreds of microservices.
@TheBaus We are using Terraform to auto-create target groups attached to a load balancer (NLB). To create them we use a JSON list of key-value pairs of port and service name; this creates all the target groups that we need.
To create a binding (connecting a target group to the service endpoints) we use TargetGroupBinding. We keep the service name the same while spinning up in EKS, and aws-lb-controller makes sure to auto-map the EKS endpoints to your dedicated target group.
This allows us to direct traffic through a single NLB to 200 different TCP microservices.
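A minimal stand-in for that flow (not the poster's actual Terraform): generate one TargetGroupBinding per service/port pair and pipe the manifests to kubectl. Service names, ports, and the ARN are placeholders invented for illustration.

```shell
#!/bin/sh
set -eu

# Placeholder ARN prefix; in the real setup Terraform outputs the ARNs.
TG_ARN_PREFIX="arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup"

# Emit one TargetGroupBinding manifest for a "name port" pair.
gen_tgb() {
  cat <<EOF
---
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: $1
spec:
  serviceRef:
    name: $1
    port: $2
  targetGroupARN: ${TG_ARN_PREFIX}/$1/0123456789abcdef
EOF
}

# Stand-in for the JSON list of port/service-name pairs.
gen_tgb svc-a 5032
gen_tgb svc-b 5033
# Pipe the output into: kubectl apply -f -
```

The same loop scales to hundreds of services as long as the Kubernetes service names match the target group names chosen on the Terraform side.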
I'm fully on the side of @TheBaus. For example, in our enterprise we maintain a Kubernetes platform based on EKS for internal customers who only have access to Kubernetes. We have a separation of concerns here and really don't want to give our customers permission to create anything in the AWS account itself, so they would not be able to create TargetGroups by themselves.
I am also interested in re-using a single NLB to expose multiple services. This reduces complexity and cost.
> Why allow ALB's to be grouped and have listeners based on different ports, host headers and paths but disallow NLB's that are grouped that have listeners using different ports? What am I not understanding about the statement "since k8s service maps closely to an NLB model"?
Full ack from my side. I don't understand this statement either.
It would be great if the LB controller were able to create target groups. This would avoid the procedure of creating target groups with Terraform/CF/CDK and then referencing their ARNs in YAML files. As we can see in this thread, each person builds a custom solution to solve this.
As an intermediate step I would already be happy if I could specify an existing NLB through an annotation on a Kubernetes Service, and additionally specify a target group NAME on that NLB with another annotation. The LB controller could then create a `TargetGroupBinding`. In my case I am not just working with internal Helm charts but also with 3rd-party Helm charts, and most of them expose properties to set service annotations nowadays, since (most) cloud provider controllers allow modification of their LBs through annotations.
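To make that intermediate step concrete, a Service could hypothetically look like the sketch below. The `aws-load-balancer-name` annotation exists today; the target-group-name annotation is invented here purely to illustrate the proposal and is NOT implemented by the controller.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # exists today: names the NLB
    service.beta.kubernetes.io/aws-load-balancer-name: shared-nlb
    # hypothetical (proposal only, not implemented):
    service.beta.kubernetes.io/aws-load-balancer-target-group-name: my-service-tg
spec:
  type: NodePort
  selector:
    app: my-service   # assumed pod label
  ports:
    - port: 5032
      protocol: TCP
```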
It would be great to have an `nlb.group.name` like `alb.group.name`...
Currently, if I set the same NLB name it works (the Services share the same NLB), but the first Service to be deleted deletes the NLB too...
Is there anything I can do to achieve a shared NLB?
I can see this issue is closed, but I can't see why. Has any solution for reusing a single NLB appeared?
Problem

Currently this is possible with ALB via the ingress annotation `group.name`, but what we need is a load balancer that can work for TCP as well as HTTP, and that can only be done using an NLB. There we hit the hard problem that we don't know how to use a single NLB with multiple Services and Namespaces. Another issue is that AWS has a 50-NLB limit per region, and we are sure to hit that limit soon if we plan for one NLB per Service.

Describe the solution you'd like

Guidelines for using a single NLB with multiple services which are created in different namespaces. Please provide information on this if possible; if not, then what is the best possible way this can be done?
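For comparison, the ALB grouping referred to above does exist today via the `alb.ingress.kubernetes.io/group.name` annotation; a minimal sketch (host, service name, and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
  annotations:
    kubernetes.io/ingress.class: alb
    # Ingresses sharing the same group.name are merged into one ALB.
    alb.ingress.kubernetes.io/group.name: shared-alb
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a
                port:
                  number: 80
```

It is exactly this merge behavior that this issue asks for on the NLB side.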