Closed nadtoka closed 3 weeks ago
I have checked the code of the 3 modules mentioned above and have not seen listener resources there... But if I can manage health checks using your modules, please let me know.
Greetings,
As you have also checked and confirmed, the listener resources are not created by Terraform. Instead, they are created by the LoadBalancer-type services in Kubernetes. You can find them by running `kubectl get svc -A | grep -i loadbalancer`. For most common use cases, this service is likely created by your chosen ingress controller Helm chart.
In more detail, LoadBalancer-type service(s) in CCE clusters that are annotated with `kubernetes.io/elb.id` will target that ELB and create/update/manage listeners on it based on the port and backend configurations inside the service. Since most Kubernetes clusters expose other services via an ingress controller, this is the most common case where a LoadBalancer service is created. However, it is also possible to connect an ELB directly to a pod using LoadBalancer services in scenarios where a layer-7 reverse proxy (ingress) is not desired.
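For illustration, a minimal sketch of such a service attaching to a pre-existing ELB (the id, names, and ports below are placeholders, not values from your setup):

```yaml
# Hypothetical LoadBalancer service that attaches to an existing ELB.
apiVersion: v1
kind: Service
metadata:
  name: my-app                            # placeholder name
  annotations:
    kubernetes.io/elb.id: "<your-elb-id>" # id of the Terraform-managed ELB
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: http
      port: 80          # listener port created on the ELB
      targetPort: 8080  # container port behind the service
```

CCE's cloud controller watches services like this and creates/updates a listener on port 80 of the referenced ELB, pointing at the node ports it assigns.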
As described in OTC Docs - Creating a LoadBalancer Service, you can disable the health check for the created listeners via the `kubernetes.io/elb.health-check-flag: off` annotation. Alternatively, it can also be configured via `kubernetes.io/elb.health-check-option`.
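Applied to a service's metadata, that could look like the following sketch (the option values are only examples; check the OTC docs for the full set of supported fields):

```yaml
metadata:
  annotations:
    kubernetes.io/elb.id: "<your-elb-id>"  # existing ELB, placeholder
    # Either disable health checks entirely for the created listeners...
    kubernetes.io/elb.health-check-flag: "off"
    # ...or keep them on and tune them instead (example values):
    # kubernetes.io/elb.health-check-flag: "on"
    # kubernetes.io/elb.health-check-option: '{"protocol":"TCP","delay":"5","timeout":"10","max_retries":"3"}'
```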
I hope this clarifies the topic.
Can
Here are some additional notes on the alternatives for creating ELBs and listeners, and why we recommend doing it the way we do:
Create both ELB and listeners via Terraform: While this is possible via `opentelekomcloud_lb_listener_v2`, it requires knowing the IPs and node ports of the worker nodes before the application that needs the load balancer is even installed. Furthermore, the resulting configuration is static: if the pods of the application move to a different node because of balancing, scaling, or failure, updating the configuration requires a re-apply in Terraform, making this a really bad solution for the dynamic nature of Kubernetes clusters.
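To make the static nature concrete, here is a sketch of the fully Terraform-managed variant (all names, IPs, and ports are hypothetical):

```hcl
# Sketch only: every node IP and NodePort must be known and hard-coded up front.
resource "opentelekomcloud_lb_listener_v2" "http" {
  name            = "static-http"
  protocol        = "TCP"
  protocol_port   = 80
  loadbalancer_id = opentelekomcloud_lb_loadbalancer_v2.elb.id
}

resource "opentelekomcloud_lb_pool_v2" "http" {
  protocol    = "TCP"
  lb_method   = "ROUND_ROBIN"
  listener_id = opentelekomcloud_lb_listener_v2.http.id
}

# One member per worker node; must be edited and re-applied whenever
# nodes or the service's NodePort change.
resource "opentelekomcloud_lb_member_v2" "node1" {
  pool_id       = opentelekomcloud_lb_pool_v2.http.id
  address       = "192.168.0.10" # node IP, hypothetical
  protocol_port = 30080          # NodePort, hypothetical
  subnet_id     = var.subnet_id
}
```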
Create both ELB and listeners via a CCE LoadBalancer service: This solution can dynamically handle all the needs and even spawn new load balancers as required via the `kubernetes.io/elb.autocreate` annotation (see docs). The problem with this solution is that both the private IP and the EIP of the load balancer are auto-created and therefore dynamic. As a result, if the service is deleted, the EIP is released and DNS records pointing to it become invalid. Considering that DNS is cached across a chain of servers and propagation takes time, EIP changes on the ELB can be quite problematic and cause downtime.
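For reference, the auto-creation variant is driven by a JSON value in that annotation; a hedged example modelled on the OTC docs (the bandwidth and EIP settings are placeholders):

```yaml
metadata:
  annotations:
    # Asks CCE to create (and own) a new public ELB for this service.
    kubernetes.io/elb.autocreate: >-
      {"type":"public","bandwidth_name":"bw-my-app","bandwidth_chargemode":"traffic",
       "bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp"}
```

Deleting the service also deletes this auto-created ELB and releases its EIP, which is exactly the DNS problem described above.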
Create ELB via Terraform and listeners via a CCE LoadBalancer service: This is the solution we use and recommend to get the best of both worlds. Since the ELB and EIP are created via Terraform, CCE and the resources inside it do not destroy or modify the EIP. Listeners, on the other hand, are configured dynamically and updated according to the changes inside the cluster, including both the auto-assigned node ports and the node IP addresses where the targets of the service reside. The biggest downside of this method is the need to pass the ELB id from Terraform into the Helm chart or manifest if the application is not deployed by Terraform but by ArgoCD, for example.
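If the chart is deployed by Terraform as well, handing over the id can be sketched like this (the `helm_release` wiring and the chart value path are assumptions, modelled on ingress-nginx):

```hcl
# ELB and EIP live in Terraform state and survive cluster-side changes.
resource "opentelekomcloud_lb_loadbalancer_v2" "elb" {
  name          = "cluster-ingress" # hypothetical name
  vip_subnet_id = var.subnet_id
}

resource "helm_release" "ingress" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"

  # Annotate the chart's LoadBalancer service with the pre-created ELB id
  # (dots inside the annotation key must be escaped for Helm).
  set {
    name  = "controller.service.annotations.kubernetes\\.io/elb\\.id"
    value = opentelekomcloud_lb_loadbalancer_v2.elb.id
  }
}
```

When the application is deployed by ArgoCD instead, the same id can be exposed as a Terraform output and injected into the chart values on that side.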
Let me know if you need further details on any of these topics and I'll try to clarify them to the best of my ability.
Can
Is there a way to disable the health check for some of the listeners using your modules?
We already use your VPC, SNAT and CCE modules.
Currently we have a separate resource for the ELB creation (`resource "opentelekomcloud_lb_loadbalancer_v2" "elb"`), and it looks like the listeners are automatically created as part of some of the modules mentioned above or the included Helm charts.
In any case, I am trying to understand how the listeners are created and how to enable/disable the health check for some of them.
I would be very grateful for any help.