remche / terraform-openstack-rke

Terraform Openstack RKE
Mozilla Public License 2.0

Configure LoadBalancer for openstack_cloud_provider #79

Closed hfrenzel closed 3 years ago

hfrenzel commented 3 years ago

This PR adds configuration for the rke_cluster openstack_cloud_provider load_balancer block.
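For context, the block being discussed is the `load_balancer` sub-block of the RKE OpenStack cloud provider. A minimal sketch of what configuring it might look like (field names follow the OpenStack cloud provider's `[LoadBalancer]` options; all IDs are placeholders, and the exact schema added by this PR may differ):

```hcl
resource "rke_cluster" "cluster" {
  # ... nodes, services, etc. ...

  cloud_provider {
    name = "openstack"
    openstack_cloud_provider {
      # ... global/auth options ...
      load_balancer {
        use_octavia            = true
        subnet_id              = "PLACEHOLDER_SUBNET_ID"   # placeholder
        floating_network_id    = "PLACEHOLDER_NETWORK_ID"  # placeholder
        lb_method              = "ROUND_ROBIN"
        manage_security_groups = true
      }
    }
  }
}
```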

remche commented 3 years ago

@hfrenzel thanks for the work on this! The description may be confusing though: we could imagine using Octavia to load-balance the master nodes too, as described in #26. Maybe we should split this into octavia_lb_master and octavia_cloud_provider?

hfrenzel commented 3 years ago

So you meant to have use_octavia=true on the Terraform OpenStack provider as well, and to add the appropriate LB resources for either the master nodes or the edge nodes if they are enabled?

I think this is doable somehow, but rather than some octavia_lb_master variable, I'd like to have something like var.enable_loadbalancer and var.os_use_octavia to handle this for the master or edge nodes.

remche commented 3 years ago

> So you meant to have use_octavia=true on the Terraform OpenStack provider as well, and to add the appropriate LB resources for either the master nodes or the edge nodes if they are enabled?

That was the goal of #26, but splitting the feature makes sense: one variable for enabling Octavia at the node level and another to enable the cloud_provider load_balancer block.

I'm not sure using Octavia LBaaS for edge nodes is relevant though; what use case do you have in mind?

> I think this is doable somehow, but rather than some octavia_lb_master variable, I'd like to have something like var.enable_loadbalancer and var.os_use_octavia to handle this for the master or edge nodes.

I'm not rigid on the name as long as the description is obvious! ;)

hfrenzel commented 3 years ago

In our current experiment, we have 2x edge nodes and an Octavia LBaaS on layer 4 (TCP) in front of them. The three masters just have a floating_ip, but no LB. Why do you think the LB is only relevant for the master nodes in such a setup with edge nodes? I think I could prepare something for #26.

hfrenzel commented 3 years ago

This now creates the load balancer resources. There are pools for HTTP & HTTPS, with the edge nodes as members if they are configured, or the master nodes otherwise. It should work with Neutron networking, but I only have Octavia LBaaS at hand, so I cannot test it on Neutron networking.
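A minimal sketch of what such resources can look like with the OpenStack Terraform provider (shown for HTTP only; HTTPS on 443 would mirror it, and variable names like var.subnet_id and var.member_ips are hypothetical, so the PR's actual resources may differ):

```hcl
# One Octavia/LBaaS v2 load balancer with a TCP listener and pool for HTTP.
resource "openstack_lb_loadbalancer_v2" "lb" {
  name          = "rke-lb"
  vip_subnet_id = var.subnet_id          # hypothetical input
}

resource "openstack_lb_listener_v2" "http" {
  protocol        = "TCP"
  protocol_port   = 80
  loadbalancer_id = openstack_lb_loadbalancer_v2.lb.id
}

resource "openstack_lb_pool_v2" "http" {
  protocol    = "TCP"
  lb_method   = "ROUND_ROBIN"
  listener_id = openstack_lb_listener_v2.http.id
}

# Members are the edge nodes if configured, otherwise the master nodes.
resource "openstack_lb_member_v2" "http" {
  count         = length(var.member_ips) # hypothetical: edge or master IPs
  pool_id       = openstack_lb_pool_v2.http.id
  address       = var.member_ips[count.index]
  protocol_port = 80
  subnet_id     = var.subnet_id
}
```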

To enable Octavia, one needs to set provider.openstack.use_octavia = true, and optionally var.use_octavia at the module level to enable it for the cloud_provider too.
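In configuration terms, the two switches described above might look like this (the module source and variable name are shown for illustration and assume this PR's naming):

```hcl
# Provider-level flag: makes the OpenStack provider target Octavia
# instead of the deprecated Neutron LBaaS v2 API.
provider "openstack" {
  use_octavia = true
}

module "rke" {
  source = "remche/rke/openstack"  # illustrative module source
  # ...
  # Hypothetical module variable from this PR: also enables Octavia
  # in the cloud_provider load_balancer block.
  use_octavia = true
}
```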

remche commented 3 years ago

> In our current experiment, we have 2x edge nodes and an Octavia LBaaS on layer 4 (TCP) in front of them. The three masters just have a floating_ip, but no LB. Why do you think the LB is only relevant for the master nodes in such a setup with edge nodes?

Because in my view the edge nodes would use a LoadBalancer from the cloud_provider. But if you have a use case and are happy with it, that's fine by me!

> This now creates the load balancer resources. There are pools for HTTP & HTTPS, with the edge nodes as members if they are configured, or the master nodes otherwise.

Wouldn't it be relevant to have an LB for the edge nodes AND the master nodes (for the k8s API)?

> It should work with Neutron networking, but I only have Octavia LBaaS at hand, so I cannot test it on Neutron networking.

That's not a problem, as Neutron LBaaS has been deprecated for a while.

hfrenzel commented 3 years ago

> In our current experiment, we have 2x edge nodes and an Octavia LBaaS on layer 4 (TCP) in front of them. The three masters just have a floating_ip, but no LB. Why do you think the LB is only relevant for the master nodes in such a setup with edge nodes?

> Because in my view the edge nodes would use a LoadBalancer from the cloud_provider. But if you have a use case and are happy with it, that's fine by me!

Hmm, we put the LB in front of the edge nodes with just node-role.kubernetes.io/edge=true set, and enabled nginx. The rancher_server deployment is then reachable via the LB floating IP.

I could add the LB resources for :6443 (KubeAPI) too, in case it is required. (I'll update this PR in a few minutes.)
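Extending the same pattern to the Kubernetes API would mean a TCP listener and pool on 6443 pointing at the master nodes, roughly like this (var.loadbalancer_id, var.master_ips, and var.subnet_id are hypothetical inputs, not the PR's actual names):

```hcl
resource "openstack_lb_listener_v2" "kubeapi" {
  protocol        = "TCP"
  protocol_port   = 6443
  loadbalancer_id = var.loadbalancer_id  # hypothetical: the existing LB
}

resource "openstack_lb_pool_v2" "kubeapi" {
  protocol    = "TCP"
  lb_method   = "ROUND_ROBIN"
  listener_id = openstack_lb_listener_v2.kubeapi.id
}

# Pool members are always the master nodes for the k8s API.
resource "openstack_lb_member_v2" "kubeapi" {
  count         = length(var.master_ips) # hypothetical input
  pool_id       = openstack_lb_pool_v2.kubeapi.id
  address       = var.master_ips[count.index]
  protocol_port = 6443
  subnet_id     = var.subnet_id
}
```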

For the cloud_provider-configured LB, we use it e.g. from within the Cloud Foundry (kubecf) deployment, as it creates three LBs there.

> This now creates the load balancer resources. There are pools for HTTP & HTTPS, with the edge nodes as members if they are configured, or the master nodes otherwise.

> Wouldn't it be relevant to have an LB for the edge nodes AND the master nodes (for the k8s API)?

I think it depends on which labels are set and how the workload is scheduled. From my point of view, the edge nodes should be the only ones with "external" traffic, while the master nodes are just for etcd/controlplane and no additional workload should be scheduled there (ideally, they shouldn't even be reachable from outside if edge nodes are in use).

> It should work with Neutron networking, but I only have Octavia LBaaS at hand, so I cannot test it on Neutron networking.

> That's not a problem, as Neutron LBaaS has been deprecated for a while.

OK :)

remche commented 3 years ago

Thanks again for the work on this! One last thing: would you mind adding a few lines presenting the feature in the README file? Thanks!

remche commented 3 years ago

@hfrenzel thanks a lot for your work on this !