Closed by r7vme 6 years ago
This is interesting. For now I would say the IC (ingress controller) is more important than Service LBs, but we should talk to MS about whether there's another way.
My team will be working on managed services inside guest clusters in Q1 anyway, so we might make the IC optional/customizable and then just open up the option of starting a cluster without an IC.
/cc @teemow @marians as this is something with kind of high impact
This is lying around in the SIG operator backlog. Any movement? What can we do? Should we just ignore it?
We can. I don't know the user impact; this is just a technical POV.
How likely is it that this gets solved upstream?
I'll ask in kubernetes slack.
@r7vme
According to https://kubernetes.slack.com/archives/C5HJXTT9Q/p1520510302000070
A single LB in Azure can handle 10 frontend IPs. One is allocated for ingress, so the rest are free for LB Services. According to that, you can create up to 9 LB Services. Is that correct?
So it looks like we just have a limitation. Please confirm.
> A single LB in Azure can handle 10 frontend IPs. One is allocated for ingress, so the rest are free for LB Services. According to that, you can create up to 9 LB Services. Is that correct?
Yes, it's correct. :) But it does not solve the problem.
The problem is that if you do `kubectl apply -f xxx.yaml`, where `xxx.yaml` is something like
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer
Kubernetes will create a NEW load balancer. AFAIK it does not support reusing the same load balancer.
Maybe it'll create a new LB, and then you can have 9 more Services running on the same LB. Adding a single NIC has the potential to solve the problem to some extent. Need to check.
What I saw in action a few days ago on Azure was that a new Service of type LoadBalancer only resulted in a new IP (but I'm not sure what went on behind the scenes). This seems to be the intended way on Azure: not adding LBs, but handing out IPs, since each LB supports more than a single IP. If we have questions here, we can ping Stuart to see if he can inquire internally.
We can also ask Dennis, who showed the above Service creation in his training.
Someone just needs to do a simple test: 1) create a guest cluster, 2) create a Service with type LoadBalancer, 3) see what happens.
Next steps depending on the result:
If it reuses the LB created by Kubernetes (e.g. it filters LBs based on tags), then we can use those tags for our ingress LB, so Kubernetes will reuse the ingress LB.
Let me know if a chat is needed.
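The test described above could be run roughly like this. This is a minimal sketch, assuming `kubectl` already points at a freshly created guest cluster, the Azure CLI is logged in, and that `<RESOURCE_GROUP>` / `<LB_NAME>` are placeholders for the guest cluster's actual resource group and load balancer (not values from this issue):

```shell
# 2) Create a Service of type LoadBalancer (nginx as a stand-in app).
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Wait until the cloud provider assigns an external IP.
kubectl get service nginx --watch

# 3) Inspect what Azure actually did: did a new LB appear, or did the
# existing ingress LB gain an extra frontend IP?
az network lb list --resource-group <RESOURCE_GROUP> --output table
az network lb frontend-ip list --resource-group <RESOURCE_GROUP> \
  --lb-name <LB_NAME> --output table
```

If the second `az` listing shows a new frontend IP on the existing LB rather than a whole new LB, that would match the behavior described earlier in the thread.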
This is lying around in the SIG operator backlog. Any movement? What can we do? Should we just ignore it?
I think we still need to test it, or did someone already run the test described above? It should work, at least from what I'm hearing from the MS guys.
Are we fine with closing it? We have a few pilots on Azure. Nobody complains and nobody is keen to test that.
Fine with me.
Oki doki!
Azure does not allow associating more than ONE load balancer per VM (per NIC). By default the azure-operator creates an ingress load balancer attached to all worker VMs. So the single load balancer slot is already occupied, and users will not be able to create an Azure load balancer via Kubernetes.
This bug just opens a discussion.
I have two questions: 1) Do we want to support the ability to create Azure load balancers via Kubernetes? If not, this bug can be closed. 2) If yes, how do we want to design the solution?
cc: @puja108 @kopiczko