Closed souhradab closed 3 years ago
Hey Bill, thanks for filing the issue.
It's essentially an air gapped management network
This by itself shouldn't be an issue.
two additional "isolated" Workload networks.
Can you elaborate a bit what you mean by isolated here. Is it that these networks are not routable to each other, but they are routable to the primary WL network? We only support a configuration in which workload networks are L3 routable to each other. Apologies if that is the case here and I'm simply misinterpreting your comment.
Do you mind sharing which routes you added as workarounds and why this was sufficient for your needs?
Could you also clarify a bit more what you are asking? Is it support for such a network topology in general or automation around the route configuration?
The workload networks are routable to each other.
Perhaps this will help:
It's more or less the topology described in the VMware vSphere with Tanzu networking documentation, in the section called "Topology with Multiple Isolated Workload Networks":
And the HA Proxy 3-NIC configuration.
Thank you for the excellent diagram. It is very helpful in understanding what you're accomplishing.
We have two options.
@souhradab I want to get your opinion whether you prefer one option or another. Thanks!
@daniel-corbett we would welcome your advice if you're interested as well.
Regarding (1), without constraining your options, maybe just giving the option for custom (static) routes would help solve this for people with special setups like mine. You could even still give the option of setting any interface as the default GW.
Regarding (2), NIC per workload network will likely run into VM virtual NIC limits in larger environments.
Regarding (1), without constraining your options, maybe just giving the option for custom (static) routes would help solve this for people with special setups like mine. You could even still give the option of setting any interface as the default GW.
This is true, but we also want to keep things simple. By default, workload networks must be routable to each other, which means defaulting to the workload gateway would be good enough for the majority of users. Allowing users to add specific routes by default may also encourage complex configurations which may increase the cost to debug for both users and VMware/HAProxy.
For more complex configurations, users are free to make edits to route configurations as they wish once the appliance is stood up. In this setup, they are the owners of the appliance. It's unclear if adding passthrough code for routes as a tradeoff for an even more complex configuration is a net benefit over a default gateway.
I agree with your point on NIC limits. If we expect some users to have a large number of networks, this is probably not a viable option if we want to support that user segment.
After more thought, making workload the default gateway may make it harder for some users to manage the appliance over something like SSH. Routes, as you suggested, might be the way to go here.
Maybe some sort of menu item: default GW on the management or workload network? And then, based on the choice, static routes could be added, if necessary, on the interface that is NOT assigned the default GW.
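The menu-item idea above could map onto the appliance's systemd-networkd configuration roughly like this. This is only a sketch of one of the two choices (default GW on management); all interface names, subnets, and gateway addresses here are illustrative assumptions, not values from this issue:

```ini
# Choice: default GW lives on the management interface.
# /etc/systemd/network/10-management.network (excerpt)
[Route]
# Hypothetical management default gateway
Gateway=10.0.0.1

# Static routes then go on the workload interface, since it is
# NOT the default-GW interface.
# /etc/systemd/network/10-workload.network (excerpt)
[Route]
# Hypothetical isolated workload subnet, reached via a router
# on the primary workload segment
Destination=10.20.0.0/24
Gateway=192.168.10.1
```

The other choice (default GW on the workload interface) would simply swap which file carries the bare `Gateway=` entry and which carries the `Destination=` routes.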
version: HA Proxy Load Balancer API v0.1.10
I have a 3 NIC HA Proxy setup.
NIC 1: MGMT: Default Gateway is configured here.
NIC2: Primary Workload
NIC3: Frontend
I have a peculiar management network setup. My environment is set up such that the MGMT network, where my ESXi hosts, vCenter, Supervisor MGMT, and HA Proxy MGMT all reside, does not have a route to the Workload networks. It's essentially an air gapped management network.
My Tanzu cluster setup contains a Primary and two additional "isolated" Workload networks. Traffic that enters the HA Proxy Frontend and is destined for backends on the Primary Workload network reaches those backends fine because the HA Proxy NIC2 is directly connected.
However, the issue I run into is that when traffic enters the HA Proxy Frontend and is forwarded to destination backends located on the isolated Workload networks, it is sent to the Default Gateway on the management interface, and that network cannot reach the secondary workload networks. I thought that by adding some values in route-tables.cfg for the isolated workload networks I would be able to configure static routes for the Frontend network, but either this does not work the way I thought it would, or I am getting the syntax wrong.
In the end I was able to work around my issue by adding static routes into the Frontend network-scripts file (/etc/systemd/network/10-frontend.network).
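A minimal sketch of what such a workaround might look like, assuming the isolated workload subnets are 10.20.0.0/24 and 10.30.0.0/24 and are reachable via a router at 192.168.100.1 on the Frontend segment (all names and addresses here are hypothetical placeholders, not values taken from this environment):

```ini
# /etc/systemd/network/10-frontend.network (excerpt)
[Match]
Name=frontend

[Network]
Address=192.168.100.2/24

# Hypothetical static routes so traffic destined for the isolated
# workload networks leaves via the Frontend interface instead of
# the default gateway on the management interface.
[Route]
Destination=10.20.0.0/24
Gateway=192.168.100.1

[Route]
Destination=10.30.0.0/24
Gateway=192.168.100.1
```

After editing the file, the routes can be applied with something like `systemctl restart systemd-networkd` and verified with `ip route`.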