Closed: petrutlucian94 closed this issue 3 years ago.
@petrutlucian94 thanks for the issue report, I think it's reasonable. I myself don't know much about kube-vip, so I will do more homework. The question I have is: if we are using a provider network (no tenant network at all), with the new option, should it also be OK to deploy even without kube-vip?
Even when using Octavia, it would be beneficial to have such an option to not use FIPs and (in our case) set a fixed VIP instead. We run Octavia on Neutron with OVN and are unable to connect to the FIP from the nodes, as the return packets have the wrong sender address. As a workaround, we use static VIPs instead of FIPs.
> The question I have is: if we are using a provider network (no tenant network at all), with the new option, should it also be OK to deploy even without kube-vip?
Yep, kube-vip is not required. However, some form of load balancing would be desirable in order to allow scaling or upgrading the cluster, in which case Octavia might be used as an alternative.
> Even when using Octavia, it would be beneficial to have such an option to not use FIPs and (in our case) set a fixed VIP instead. We run Octavia on Neutron with OVN and are unable to connect to the FIP from the nodes, as the return packets have the wrong sender address. As a workaround, we use static VIPs instead of FIPs.
Thanks for the feedback. I think we'd have to change this part a bit and maybe assign the control plane address as a (secondary?) fixed IP instead of a floating IP: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/bff2282b48e770e921b91615040bdb97ddfa2c02/pkg/cloud/services/loadbalancer/loadbalancer.go#L52-L62
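For illustration, here is a rough sketch of how that step could become conditional. The types, field names (`UseControlPlaneFIP`, `ControlPlaneEndpointIP`) and helpers below are hypothetical stand-ins, not CAPO's actual API:

```go
package main

import "fmt"

// OpenStackClusterSpec is a stripped-down, hypothetical stand-in for the real spec.
type OpenStackClusterSpec struct {
	// UseControlPlaneFIP controls whether a floating IP is created and
	// associated for the API server (name taken from this issue, not the repo).
	UseControlPlaneFIP bool
	// ControlPlaneEndpointIP is an operator-provided fixed VIP used when
	// floating IPs are disabled (hypothetical field).
	ControlPlaneEndpointIP string
}

// reconcileControlPlaneAddress sketches the conditional step: either keep the
// existing create/associate-FIP behaviour, or fall back to a fixed
// (possibly secondary) IP as the control plane endpoint.
func reconcileControlPlaneAddress(spec OpenStackClusterSpec, lbPortIP string) (string, error) {
	if spec.UseControlPlaneFIP {
		return ensureFloatingIP(lbPortIP) // existing behaviour, details omitted
	}
	if spec.ControlPlaneEndpointIP != "" {
		return spec.ControlPlaneEndpointIP, nil // operator-provided fixed VIP
	}
	return lbPortIP, nil // simplest case: use the port's fixed IP directly
}

// ensureFloatingIP is a placeholder for the existing Neutron FIP logic.
func ensureFloatingIP(portIP string) (string, error) {
	return "203.0.113.10", nil // pretend a FIP was allocated and associated
}

func main() {
	ep, _ := reconcileControlPlaneAddress(
		OpenStackClusterSpec{UseControlPlaneFIP: false, ControlPlaneEndpointIP: "10.0.0.5"},
		"10.0.0.3",
	)
	fmt.Println("control plane endpoint:", ep)
}
```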
Then we have several options: no FIP (provider network), with VIP, with FIP? And maybe more?
> Then we have several options: no FIP (provider network), with VIP, with FIP? And maybe more?
I've submitted a simple PR, adding just the `useControlPlaneFIP` option for now. There are two cases: `Machine` spec or using a static kube-vip pod. Please let me know if you're ok with it.
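Not the actual PR, just to illustrate the shape such an option could take on the cluster spec; the field name comes from this discussion, and defaulting to true when unset is an assumption made here for backwards compatibility:

```go
package main

import "fmt"

// OpenStackClusterSpec stands in for the real CRD spec type.
type OpenStackClusterSpec struct {
	// UseControlPlaneFIP, when false, tells the provider not to create or
	// associate a floating IP for the control plane endpoint; the endpoint is
	// then expected to be reachable directly (e.g. provider network, or a
	// fixed VIP managed by kube-vip).
	UseControlPlaneFIP *bool `json:"useControlPlaneFIP,omitempty"`
}

// useControlPlaneFIP applies a backwards-compatible default of true when the
// field is left unset.
func useControlPlaneFIP(spec OpenStackClusterSpec) bool {
	if spec.UseControlPlaneFIP == nil {
		return true
	}
	return *spec.UseControlPlaneFIP
}

func main() {
	var spec OpenStackClusterSpec // field left unset
	fmt.Println("default:", useControlPlaneFIP(spec))

	no := false
	spec.UseControlPlaneFIP = &no
	fmt.Println("opted out:", useControlPlaneFIP(spec))
}
```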
This is related to #875
This is implemented by #973
/close
@hidekazuna: Closing this issue.
/kind feature
Describe the solution you'd like
When not using Octavia, CAPO expects the control plane endpoint address to be a floating IP and tries to associate it with the instance port: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/1f7a85455539f70d52fbc8f4f31241b71ed66e11/controllers/openstackmachine_controller.go#L363-L380
This is not always the case; some OpenStack clouds do not use floating IPs at all. We're assigning the control plane address using kube-vip at the guest level, thus avoiding the Octavia dependency.
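For illustration only, here is a minimal sketch (with made-up names, not the controller's real code) of how that floating IP association could be made skippable when the address is managed inside the guests:

```go
package main

import "fmt"

type machineContext struct {
	controlPlaneEndpoint string // address from the cluster spec (FIP or fixed VIP)
	useControlPlaneFIP   bool   // hypothetical opt-out flag discussed in this issue
	isControlPlane       bool
	managedAPIServerLB   bool // true when Octavia manages the endpoint
}

// reconcileAPIServerAddress mirrors the decision flow: only associate a
// floating IP when the cluster actually uses one for the control plane.
func reconcileAPIServerAddress(ctx machineContext, instancePortID string) error {
	if !ctx.isControlPlane || ctx.managedAPIServerLB {
		return nil // nothing to do for workers or Octavia-managed endpoints
	}
	if !ctx.useControlPlaneFIP {
		// kube-vip (or similar) owns the VIP inside the guests; skip the
		// floating IP association entirely.
		return nil
	}
	return associateFloatingIP(ctx.controlPlaneEndpoint, instancePortID)
}

// associateFloatingIP is a placeholder for the existing Neutron call.
func associateFloatingIP(fip, portID string) error {
	fmt.Printf("associating FIP %s with port %s\n", fip, portID)
	return nil
}

func main() {
	_ = reconcileAPIServerAddress(machineContext{
		controlPlaneEndpoint: "10.0.0.5",
		useControlPlaneFIP:   false,
		isControlPlane:       true,
	}, "port-123")
}
```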
We'd like to have a config option such as `useControlPlaneFIP` to allow skipping the floating IP assignment.
Anything else you would like to add:
It's a trivial change; I can go ahead and open a PR. I wanted to check that you're ok with it first.