The mechanism for IPv6 is a simple NAT performed by HAProxy in order to route traffic into the v4 pod network. The steps are:
1. Add the IPv6 address found in the `config6` map to the realserver's interface.
2. Create an HAProxy config for each address + port pair that is relevant to this realserver node (in other words, the node has pods for that service). This config routes v6 traffic directly to the pod backends; a sketch of such a config follows this list.
3. Start the service. Watch Kubernetes and, on updates to the relevant backends, rewrite the config and reload it so we don't route to dead pods. The reload process is well documented by HAProxy and is done by sending a `SIGHUP` to the HAProxy server process.
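To make step 2 concrete, here is a minimal sketch of what generating one of these per-pair configs could look like in Go. The struct, template, addresses, and naming scheme are illustrative assumptions rather than the project's actual types or output; the essential point is that HAProxy binds the v6 address and proxies to v4 pod IPs.

```go
package main

import (
	"os"
	"text/template"
)

// frontend describes one v6 VIP + port pair and the current set of v4
// pod IPs backing it. Names here are illustrative, not the project's types.
type frontend struct {
	V6Addr string
	Port   int
	Pods   []string // v4 pod IPs
}

// One config per v6 address + port pair: HAProxy accepts the v6
// connection and opens a new v4 connection to a pod, which is the
// NAT step described above.
const cfgTmpl = `frontend fe_{{.V6Addr}}_{{.Port}}
    mode tcp
    bind [{{.V6Addr}}]:{{.Port}}
    default_backend be_{{.V6Addr}}_{{.Port}}

backend be_{{.V6Addr}}_{{.Port}}
    mode tcp
{{- range $i, $ip := .Pods}}
    server pod{{$i}} {{$ip}}:{{$.Port}} check
{{- end}}
`

func main() {
	fe := frontend{
		V6Addr: "2001:db8::10",
		Port:   8080,
		Pods:   []string{"10.244.1.5", "10.244.2.9"},
	}
	t := template.Must(template.New("haproxy").Parse(cfgTmpl))
	// The real code would write this to the file the matching haproxy
	// process was started with; stdout is enough for the sketch.
	_ = t.Execute(os.Stdout, fe)
}
```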
There is a 1:1 relationship between each v6 address + port pair and an HAProxy process. This is because an update to a set of backend pods causes an update to the config and a momentary drop in traffic while the process reloads; with a shared config, an update to one namespace would cause a drop for a different namespace. If all configs were shared, update churn could potentially cause a lot of traffic loss.
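As a rough illustration of that isolation, assuming a hypothetical `pids` map and config path layout (neither is the project's actual bookkeeping), a reload touches only the one process whose backends changed:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// vipPort identifies one v6 address + port pair; each pair owns its own
// haproxy process, so reloading one never interrupts traffic for another.
type vipPort struct {
	Addr string
	Port int
}

// pids maps each pair to the PID of its haproxy process. How PIDs are
// tracked is an assumption made for this sketch.
var pids = map[vipPort]int{}

// reload rewrites the config for a single pair and signals only that
// pair's process. SIGHUP mirrors the reload mechanism named in the text.
func reload(vp vipPort, cfg []byte) error {
	path := fmt.Sprintf("/etc/haproxy/%s_%d.cfg", vp.Addr, vp.Port) // illustrative path
	if err := os.WriteFile(path, cfg, 0o644); err != nil {
		return err
	}
	pid, ok := pids[vp]
	if !ok {
		return fmt.Errorf("no haproxy process for %s:%d", vp.Addr, vp.Port)
	}
	return syscall.Kill(pid, syscall.SIGHUP)
}

func main() {
	vp := vipPort{Addr: "2001:db8::10", Port: 8080}
	pids[vp] = 4242 // placeholder PID
	_ = reload(vp, []byte("# regenerated config\n"))
}
```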
Changes to the `endpoints` resource can be caused by: changing the number of pods in a deployment, replication controller, daemonset, etc.; modifying their labels; and creating, updating, or deleting a service, among other things.
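The watch half of that flow might look roughly like the client-go sketch below. The event handling is reduced to a log line, and the hook back into config regeneration and reload is only noted in a comment, since the project's actual wiring (which may use informers rather than a raw watch) isn't shown here.

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config; a kubeconfig-based client would work the same way.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Watch endpoints across all namespaces; every event here is a
	// candidate trigger for regenerating a config and reloading the
	// matching haproxy process.
	w, err := client.CoreV1().Endpoints(metav1.NamespaceAll).Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for ev := range w.ResultChan() {
		// A hypothetical regenerateAndReload() would be called here,
		// feeding into the per-pair reload sketched earlier.
		log.Printf("endpoints event: %s", ev.Type)
	}
}
```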
For the director, the workflow is essentially the same: get the set of backend nodes and route the v6 addresses found in `config6` to the v6 address of each node, with the caveat that the `node-addr-v6` flag must be set on the node. This will not be necessary in later Kubernetes versions.
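A hedged sketch of how the director might resolve a node's v6 address follows. The text only says the `node-addr-v6` flag is set on the node, so treating it as a node annotation here is an assumption, as is the fallback to the node status addresses that newer Kubernetes versions can populate with an IPv6 InternalIP.

```go
package main

import (
	"fmt"
	"net"

	corev1 "k8s.io/api/core/v1"
)

// nodeV6Addr returns the IPv6 address the director should route a VIP to
// for the given node. The "node-addr-v6" key and its use as an annotation
// are assumptions for this sketch.
func nodeV6Addr(node *corev1.Node) (string, error) {
	if v6, ok := node.Annotations["node-addr-v6"]; ok && v6 != "" {
		return v6, nil
	}
	// Later Kubernetes versions can expose an IPv6 InternalIP directly in
	// the node status, which is why the flag eventually becomes unnecessary.
	for _, addr := range node.Status.Addresses {
		if addr.Type != corev1.NodeInternalIP {
			continue
		}
		if ip := net.ParseIP(addr.Address); ip != nil && ip.To4() == nil {
			return addr.Address, nil
		}
	}
	return "", fmt.Errorf("no v6 address found for node %s", node.Name)
}

func main() {
	n := &corev1.Node{}
	n.Name = "worker-1"
	n.Annotations = map[string]string{"node-addr-v6": "2001:db8::21"}
	addr, err := nodeV6Addr(n)
	fmt.Println(addr, err)
}
```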