Open aslatter opened 9 months ago
Additional context:
Our AKS clusters are typically deployed with two Kubernetes Services of type load-balancer, each corresponding to a different public IP address with different IP-allow-lists on them, mapped to different Kubernetes ingress controllers.
So having this configuration be per Kubernetes Service (and not a global AKS-level setting) is important to us.
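For reference, each of those Services can carry its own allow-list via the standard `loadBalancerSourceRanges` field; a minimal sketch, with illustrative names, IPs, and CIDRs:

```yaml
# One of the two LoadBalancer Services; the second would use a different
# pre-allocated public IP and a different allow-list.
apiVersion: v1
kind: Service
metadata:
  name: ingress-partner-a            # illustrative name
  annotations:
    # Lets the Service use a public IP from a resource group we manage.
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-ip-rg
spec:
  type: LoadBalancer
  loadBalancerIP: 20.0.0.10          # illustrative pre-allocated public IP
  loadBalancerSourceRanges:          # per-Service IP allow-list
    - 203.0.113.0/24
  selector:
    app: ingress-a
  ports:
    - port: 443
      targetPort: 8443
```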
Is your feature request related to a problem? Please describe.
We are interested in establishing private connectivity for third parties in separate Azure tenants to our AKS-hosted services.
We would like to do this with a Private Link Service. A Private Link Service is an Azure resource we can place in front of an Azure load-balancer to make the application behind the load-balancer privately accessible.
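For context, creating such a Private Link Service against an existing load-balancer frontend looks roughly like the following with the Azure CLI (all resource names below are placeholders):

```shell
# Create a Private Link Service attached to an existing load balancer's
# frontend IP configuration. Names, groups, and subnets are placeholders.
az network private-link-service create \
  --resource-group my-rg \
  --name my-pls \
  --vnet-name my-vnet \
  --subnet pls-subnet \
  --lb-name my-lb \
  --lb-frontend-ip-configs my-frontend-config
```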
We can do this with the load balancers provisioned as part of AKS; however, these load balancers are deleted and recreated whenever we need to re-create the AKS cluster. New AKS features often require provisioning a new AKS cluster, and we sometimes provision a fresh control plane to resolve production issues.
Whenever we re-create the AKS cluster, we would be forced to re-create the Private Link Service as well, which would force the third parties to request access to the new Private Link Service. This would mean a service disruption and potentially a network reconfiguration on the third party's side.
Describe the solution you'd like
We would like the ability to link a Kubernetes Service object to an existing load-balancer backend pool that we manage.
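One possible shape for this, purely as a sketch: a Service annotation pointing at a pre-existing backend pool. The annotation name below does not exist today; it only illustrates the requested behavior.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-ingress
  annotations:
    # HYPOTHETICAL annotation -- not implemented. Illustrates binding this
    # Service to an existing, user-managed backend pool instead of an
    # AKS-managed load balancer.
    service.beta.kubernetes.io/azure-load-balancer-backend-pool-id: >-
      /subscriptions/<sub>/resourceGroups/my-rg/providers/Microsoft.Network/loadBalancers/my-lb/backendAddressPools/my-pool
spec:
  type: LoadBalancer
  selector:
    app: my-ingress
  ports:
    - port: 443
```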
EKS offers a similar feature with a CRD tying the service to an LB: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/targetgroupbinding/targetgroupbinding/
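For comparison, the EKS CRD linked above binds a Service to an externally managed target group roughly like this (the ARN and names are placeholders):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb
spec:
  serviceRef:
    name: my-service   # the Kubernetes Service to register as targets
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef
```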
Describe alternatives you've considered
We could introduce a load balancer in front of the Kubernetes-managed load balancer, but our understanding is that we would need some VM-hosted appliance to route traffic between the two LBs. We're not interested in managing things like OS patches and OS upgrades ourselves, and making such an appliance highly available would be complex.
We could front our services with a separate Azure VNet, ask the third parties to peer to it, and place the Private Link services in the VNet we manage. That way we would control both ends of the private link and could fail over to a second AKS control plane without service disruption (with some DNS manipulation). This solution is more complex and pushes additional DNS management onto the third parties; if the other party were managing the private endpoint themselves, they could take advantage of Azure Private DNS zone integration.