Open arcalys opened 1 year ago
Hi @arcalys, I confirm this is a solution we are looking into, both to handle a smooth transition to a new load balancer and to allow adding custom controllers alongside our managed ones, as you can do today with CSI. I need the team to confirm full feasibility and supportability before officially adding this to the roadmap, but I hope to do so soon.
Hi @arcalys @mhurtrel, we are looking for the same support to implement exactly the same use case (PureLB as a private LB in MKS).
Hello, I confirm this is a solution the new load balancer will support. I should be able to give you an ETA pretty soon :)
any update on this?
Hello @surprisingb, sorry for the late reply. At least I have good news: this issue is now fixed! `spec.loadBalancerClass` is now taken into account, so you can use your own operator. Please note that if it is not specified or is null, an OVH LB will be created automatically.
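For reference, a minimal Service manifest using this field might look like the sketch below. The class name `example.com/my-lb` is a placeholder; it must match whatever class your own controller is configured to claim (e.g. MetalLB's `--lb-class` flag):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  # Placeholder class name: must match the class your own
  # load balancer controller watches for.
  loadBalancerClass: example.com/my-lb
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

If `loadBalancerClass` is omitted, the default cloud controller handles the Service (here, an OVH LB is provisioned). Note that Kubernetes treats this field as immutable once the Service is created, so it cannot be added to an existing LoadBalancer Service after the fact.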
@antonin-a thank you.
Is there an example or something that documents this feature?
I've been trying to implement MetalLB with a custom `loadBalancerClass`, but I'm not sure it's working as I'd like it to.
@surprisingb if you choose to use the MetalLB or PureLB solutions, you should rely on their official docs (e.g. https://metallb.universe.tf/installation/), as there is nothing specific to OVHcloud Managed Kubernetes Service. What is the blocking point you are facing?
Please note that if your use case is private <> private LB there is an ongoing private beta to use Octavia as LB for MKS, it might be a solution.
> @surprisingb if you choose to use the MetalLB or PureLB solutions, you should rely on their official docs (e.g. https://metallb.universe.tf/installation/), as there is nothing specific to OVHcloud Managed Kubernetes Service. What is the blocking point you are facing?
What I'm trying to do is create a load balancer with MetalLB on a private IP inside my vRack CIDR. I set up both MetalLB and ingress-nginx to use a custom `loadBalancerClass`, but when I apply the charts a managed Kubernetes load balancer is still created on OVH with a public IP. There also seem to be some issues with L2 mode: I see traffic reaching the ingress from the public IP of the node instead of the private IP, even though I bound the announcement to the internal interface.
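For what it's worth, the setup being described could be sketched roughly as below. All names and values here are assumptions for illustration: the class name, address range, and interface name are placeholders, and the chart value names should be checked against the MetalLB and ingress-nginx Helm charts you are deploying (if the MetalLB chart does not expose a class value, the `--lb-class` flag can be passed to its controller and speaker directly):

```yaml
# metallb Helm values (sketch): have the controller claim only
# Services carrying this class (wired through to --lb-class).
loadBalancerClass: example.com/metallb

---
# ingress-nginx Helm values (sketch): request a Service handled
# by MetalLB rather than the cloud controller.
controller:
  service:
    loadBalancerClass: example.com/metallb

---
# MetalLB address pool inside the vRack CIDR (placeholder range).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: vrack-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.0.240-10.0.0.250

---
# L2 announcement bound to the private interface (placeholder name).
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: vrack-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - vrack-pool
  interfaces:
    - eth1
```

With this in place, a Service that still gets a public OVH LB usually means its `spec.loadBalancerClass` is unset or does not match the class the custom controller watches, which can be checked with `kubectl get svc <name> -o jsonpath='{.spec.loadBalancerClass}'`.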
> Please note that if your use case is private <> private LB there is an ongoing private beta to use Octavia as LB for MKS, it might be a solution.
This could be useful, how can I apply?
> I set up both MetalLB and ingress-nginx to use a custom `loadBalancerClass`, but when I apply the charts a managed Kubernetes load balancer is still created on OVH with a public IP.
Hello @surprisingb, sorry, that is my mistake. As we are currently working on the new Octavia implementation, there are a few changes in the way we manage the Cloud Controller Manager and LB creation. What I can suggest: if you think our Octavia private beta can match your requirements, it probably makes sense to test it rather than deploying your own solution (based on MetalLB).
> This could be useful, how can I apply?
You can join our Discord using this link: https://discord.gg/ovhcloud. Then go to "Container & Orchestration" > "beta-info-containers-orchestration" and send me a PM on Discord to get the access code (user: antonin.anchisi).
Since Kubernetes 1.24, `spec.loadBalancerClass` for Services is stable. We'd benefit from the OVH controller implementing it so that private<>private LB scenarios become possible in MKS.
This would allow using MetalLB/PureLB for such cases until there is native private LB support on your end (https://github.com/ovh/public-cloud-roadmap/issues/104), and in the long term it would allow your customers to run multiple load balancer implementations in their clusters.