haproxytech / vmware-haproxy


Static routes from frontend to "isolated" workload networks #15

Closed souhradab closed 3 years ago

souhradab commented 3 years ago

version: HA Proxy Load Balancer API v0.1.10

I have a 3-NIC HAProxy setup:

NIC1: MGMT (the default gateway is configured here)
NIC2: Primary Workload
NIC3: Frontend

I have a peculiar management network setup. My environment is set up such that the MGMT network, where my ESXi hosts, vCenter, Supervisor MGMT, and HAProxy MGMT all reside, does not have a route to the Workload networks. It's essentially an air-gapped management network.

My Tanzu cluster setup contains a Primary Workload network and two additional "isolated" Workload networks. Traffic that enters the HAProxy Frontend and is destined for backends on the Primary Workload network reaches those backends fine, because HAProxy NIC2 is directly connected to that network.

However, the issue I run into is that when traffic enters the HAProxy Frontend and is forwarded to backends located on the isolated Workload networks, it is sent to the default gateway on the management interface, and that network cannot reach the isolated Workload networks. I thought that by adding some values to route-tables.cfg for the isolated Workload networks I would be able to configure static routes for the Frontend network, but either this does not work the way I thought it would, or I am getting the syntax wrong.

In the end I was able to work around my issue by adding static routes to the Frontend interface's systemd-networkd configuration file (/etc/systemd/network/10-frontend.network).
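
For illustration, the routes were roughly of the following shape. The CIDRs and next-hop address below are placeholders rather than the actual values from my environment; they stand in for the isolated Workload networks and a gateway reachable from the frontend interface:

```ini
# Appended to the existing /etc/systemd/network/10-frontend.network.
# All addresses below are placeholders: one [Route] section per
# isolated workload network, pointing at a next hop that can reach it.

[Route]
Destination=10.20.30.0/24
Gateway=192.168.10.1

[Route]
Destination=10.20.40.0/24
Gateway=192.168.10.1
```

The new routes take effect once systemd-networkd re-reads the file, for example after systemctl restart systemd-networkd.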

brakthehack commented 3 years ago

Hey Bill, thanks for filing the issue.

It's essentially an air-gapped management network

This by itself shouldn't be an issue.

two additional "isolated" Workload networks.

Can you elaborate a bit on what you mean by "isolated" here? Is it that these networks are not routable to each other, but they are routable to the primary WL network? We only support a configuration in which workload networks are L3-routable to each other. Apologies if that is the case here and I'm simply misinterpreting your comment.

Do you mind sharing which routes you added as a workaround and why this was sufficient for your needs?

Could you also clarify a bit more what you are asking? Is it support for such a network topology in general or automation around the route configuration?

souhradab commented 3 years ago

The workload networks are routable to each other.

Perhaps this will help: tanzu-vsphere-networking-overview

souhradab commented 3 years ago

It's more or less the topology described in the vSphere with Tanzu networking documentation, in the section called "Topology with Multiple Isolated Workload Networks":

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-C86B9028-2701-40FE-BA05-519486E010F4.html

And the HAProxy 3-NIC configuration.

brakthehack commented 3 years ago

Thank you for the excellent diagram. It is very helpful in understanding what you're accomplishing.

We have two options.

  1. Make the workload network the default gateway instead of the management network. This means that anything that is routable via the management network's gateway will no longer be accessible. At present this is only expected to be vCenter; if that changes in the future it could become a problem, but today it's not an issue. A rough sketch of what this could look like follows this list.
  2. Add logic to issue a NIC per workload network. This is useful because it makes it clear how workload networks are routed, and it would not change existing behavior. However, it comes with a downside: the user must enter all of these networks into the configuration screen. If a user has a large number of workload networks, they could make mistakes. Terminology would also likely have to change, which could cause some confusion for users.
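
For (1), here is a minimal sketch of what the resulting systemd-networkd configuration could look like, assuming placeholder file names, interface names, and addresses: the default route simply moves from the management interface's .network file to the workload one.

```ini
# Sketch only: file name, interface name, and addresses are placeholders.
# The workload interface carries the default route; the management
# interface keeps only its on-link address (no Gateway= entry).

# /etc/systemd/network/10-workload.network
[Match]
Name=workload

[Network]
Address=10.10.0.2/24

[Route]
# No Destination= means this is the default route (0.0.0.0/0)
Gateway=10.10.0.1
```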

@souhradab I want to get your opinion whether you prefer one option or another. Thanks!

brakthehack commented 3 years ago

@daniel-corbett we would welcome your advice if you're interested as well.

souhradab commented 3 years ago

Regarding (1), without limiting your options, maybe just giving the option to configure custom (static) routes would help solve this for people with special setups like mine. You could even still give the option of setting any interface as the default GW.

Regarding (2), a NIC per workload network will likely run into the VM virtual NIC limit in larger environments.

brakthehack commented 3 years ago

Regarding (1), without limiting your options, maybe just giving the option to configure custom (static) routes would help solve this for people with special setups like mine. You could even still give the option of setting any interface as the default GW.

This is true, but we also want to keep things simple. By default, workload networks must be routable to each other, which means defaulting to the workload gateway would be good enough for the majority of users. Allowing users to add specific routes by default may also encourage complex configurations which may increase the cost to debug for both users and VMware/HAProxy.

For more complex configurations, users are free to edit the route configuration as they wish once the appliance is stood up; in that setup, they are the owners of the appliance. It's unclear whether adding passthrough code for routes, at the cost of an even more complex configuration, is a net benefit over simply using a default gateway.

I agree with your point on NIC limits. If we expect some users to have a large number of networks, this is probably not a viable option if we want to support that user segment.

brakthehack commented 3 years ago

After more thought, making the workload network the default gateway may make it harder for some users to manage the appliance over something like SSH. Routes, as you suggested, might be the way to go here.

souhradab commented 3 years ago

Maybe some sort of menu item: default GW on the management or workload network? Then, based on that choice, static routes could be added, if necessary, on the interface that is NOT assigned the default GW.