haproxytech / vmware-haproxy

Apache License 2.0

Add support for DHCP to get IP/Subnet/Gateway information #17

Closed. hkumar1402 closed this 3 years ago.

hkumar1402 commented 3 years ago

The change adds support for acquiring management, frontend, and workload network IP/subnet/gateway information via DHCP. The primary use case for this support is to make it easy to set up HAProxy for testing/PoC/lab environments.

There seems to have been some intent to support DHCP already, but not all aspects were working. The high-level changes follow.
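For context, the appliance's interfaces are managed by systemd-networkd, so falling back to DHCP amounts to emitting a `.network` file along these lines when no static address is supplied (a sketch; the file name and exact options are illustrative, not the PR's actual generated config):

```ini
# /etc/systemd/network/10-management.network (illustrative name)
[Match]
Name=management

[Network]
# When no static IP/gateway is provided via OVF properties,
# acquire address, netmask and default route via DHCP.
DHCP=ipv4
```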

Testing Done: Built and deployed the haproxy appliance OVF, keeping the IP/Gateway fields for all networks blank to indicate DHCP. Verified the IP addresses, route table entries, and IP rules.

```
root@haproxy [ ~ ]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: management: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b8:3e:18 brd ff:ff:ff:ff:ff:ff
    inet 10.170.64.31/19 brd 10.170.95.255 scope global dynamic management
       valid_lft 8085sec preferred_lft 8085sec
    inet6 fd01:3:4:2825:0:a:0:fa9/128 scope global dynamic noprefixroute
       valid_lft 24068sec preferred_lft 7868sec
    inet6 fd01:3:4:2825:250:56ff:feb8:3e18/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 2591867sec preferred_lft 604667sec
    inet6 fe80::250:56ff:feb8:3e18/64 scope link
       valid_lft forever preferred_lft forever
3: workload: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b8:65:f2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.128.11/16 brd 192.168.255.255 scope global dynamic workload
       valid_lft 343sec preferred_lft 343sec
    inet6 fe80::250:56ff:feb8:65f2/64 scope link
       valid_lft forever preferred_lft forever
4: frontend: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b8:79:f1 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.8/24 brd 172.16.10.255 scope global dynamic frontend
       valid_lft 336sec preferred_lft 336sec
    inet6 fe80::250:56ff:feb8:79f1/64 scope link
       valid_lft forever preferred_lft forever
root@haproxy [ ~ ]# ip route list
default via 10.170.95.253 dev management proto dhcp src 10.170.64.31 metric 1024
10.170.64.0/19 dev management proto kernel scope link src 10.170.64.31
10.170.95.253 dev management proto dhcp scope link src 10.170.64.31 metric 1024
172.16.10.0/24 dev frontend proto kernel scope link src 172.16.10.8
192.168.0.0/16 dev workload proto kernel scope link src 192.168.128.11
root@haproxy [ ~ ]# ip rule
0:      from all lookup local
32764:  from 172.16.10.8/24 lookup rtctl_frontend
32765:  from 192.168.128.11/16 lookup rtctl_workload
32766:  from all lookup main
32767:  from all lookup default
root@haproxy [ ~ ]# ip route list table rtctl_frontend
default via 172.16.10.1 dev frontend proto static
172.16.10.0/24 dev frontend proto kernel scope link src 172.16.10.8
root@haproxy [ ~ ]# ip route list table rtctl_workload
default via 192.168.1.1 dev workload proto static
192.168.0.0/16 dev workload proto kernel scope link src 192.168.128.11
```
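The source-based routing layout above can be reproduced with iproute2 commands along these lines (a sketch for illustration, not the appliance's actual provisioning scripts; the table IDs are made up, and the gateways are the DHCP-provided ones from this test environment):

```
# Register the named routing tables (table IDs are illustrative).
echo "2 rtctl_frontend" >> /etc/iproute2/rt_tables
echo "3 rtctl_workload" >> /etc/iproute2/rt_tables

# Give each table a default route via that network's DHCP-provided gateway.
ip route add default via 172.16.10.1 dev frontend table rtctl_frontend
ip route add default via 192.168.1.1 dev workload table rtctl_workload

# Source-based rules: traffic sourced from each subnet consults its own table,
# so replies leave via the interface the request arrived on.
ip rule add from 172.16.10.8/24 lookup rtctl_frontend
ip rule add from 192.168.128.11/16 lookup rtctl_workload
```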

Verified that sshd and dataplaneapi are bound to the management NIC.

```
root@haproxy [ ~ ]# netstat -lnp | grep ssh
tcp        0      0 10.170.64.31:22       0.0.0.0:*      LISTEN      1012/sshd
root@haproxy [ ~ ]# netstat -lnp | grep dataplaneapi
tcp        0      0 10.170.64.31:5556     0.0.0.0:*      LISTEN      1113/dataplaneapi
```
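The same binding check can be scripted. A small sketch that fails if any sshd/dataplaneapi listener is bound to anything other than the management address (the address and the sample lines are taken from the output above):

```shell
# Sample listener lines as captured on the appliance.
listeners='tcp 0 0 10.170.64.31:22 0.0.0.0:* LISTEN 1012/sshd
tcp 0 0 10.170.64.31:5556 0.0.0.0:* LISTEN 1113/dataplaneapi'

MGMT_IP=10.170.64.31

# Field 4 is the local address; flag any listener not on the management IP.
bad=$(printf '%s\n' "$listeners" | awk -v ip="$MGMT_IP" '$4 !~ "^"ip":" {print}')
if [ -z "$bad" ]; then
  echo "OK: all listeners bound to $MGMT_IP"
else
  echo "unexpected listeners:"
  printf '%s\n' "$bad"
  exit 1
fi
```

On a live system the `listeners` variable could instead be filled from `netstat -lnp | grep -E 'sshd|dataplaneapi'`.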

Verified with a basic frontend/backend configuration that proxying works as expected.
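For reference, that kind of smoke test uses a configuration of roughly this shape (a minimal sketch, not the exact config used; the bind address is the DHCP-assigned frontend IP from above, and the backend server address is hypothetical):

```
frontend fe_test
    bind 172.16.10.8:80
    mode tcp
    default_backend be_test

backend be_test
    mode tcp
    # Hypothetical server on the workload network.
    server s1 192.168.128.20:80 check
```

A request to `172.16.10.8:80` from the frontend network should then be proxied to the workload-side server, with return traffic routed via the `rtctl_workload` table.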