siderolabs / talos

Talos Linux is a modern Linux distribution built for Kubernetes.
https://www.talos.dev
Mozilla Public License 2.0

Support for dual stacked VIP #6929

Open jamesharr opened 1 year ago

jamesharr commented 1 year ago

Feature Request

Currently (as of 1.3), only a single VIP can be configured on an interface, so in a dual-stack environment the administrator has to choose between an IPv4 and an IPv6 address. I'd like support for both IPv4 and IPv6 or, even better, an arbitrary number of VIPs per interface.

Sample DeviceVIPConfig:

vip:
 - ip: 192.0.2.100
 - ip: 2001:db8::100
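
For context, here is roughly where the proposal would sit in a full machine config, assuming the existing `machine.network.interfaces` layout (the interface name is illustrative):

```yaml
machine:
  network:
    interfaces:
      - interface: eth0
        # Proposed: a list of VIPs instead of a single address,
        # covering both address families on one interface.
        vip:
          - ip: 192.0.2.100
          - ip: 2001:db8::100
```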

Things that are not quite clear to me:


jamesharr commented 1 year ago

I just thought of a use-case for multiple VIPs in a single node: Cilium Egress Gateways.

Open-source Cilium expects the user to manage which node(s) have a given egress address assigned. Having Talos manage this as part of the standard node deployment (as an additional VIP) would simplify egress NAT support.
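
For context, this is the kind of Cilium policy involved; the names, labels, and addresses below are purely illustrative. The key point is that `egressIP` must already be assigned to an interface on the selected node, which is exactly the assignment a Talos-managed VIP could take care of:

```yaml
# Hedged sketch of a Cilium egress gateway policy; all values are illustrative.
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: egress-sample
spec:
  selectors:
    - podSelector:
        matchLabels:
          app: example
  destinationCIDRs:
    - 0.0.0.0/0
  egressGateway:
    nodeSelector:
      matchLabels:
        kubernetes.io/hostname: gateway-node
    # Cilium does not assign this address to the node itself;
    # something else (today: the operator) must put it there.
    egressIP: 192.0.2.200
```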

The current Cilium documentation no longer includes the example it previously provided for managing which node holds the egress gateway address. I've included links to the older documentation for reference.

References

jamesharr commented 1 year ago

I also thought of a half-decent way to migrate through API revisions.

# deprecated method, still supported
vip:
  ip: ...

# New method
vips:
  - ip: ...
  - ip: ...
  - equinixMetal: ...
  - hcloud: ...

I'm guessing someone on the Talos team would have thought of that anyway, but it came to mind today.

smira commented 1 year ago

Talos VIP is tied to the kube-apiserver, so it might not be great for other use cases.

E.g. it won't work on worker nodes, and it will go down if the API server goes down.

There's always an option to run any other solution to manage VIPs.

jamesharr commented 1 year ago

Yeah, I also realized that after playing with it in the lab. Talos wouldn't let me put a VIP on a worker node, so I figured it wasn't really meant to be a general-purpose VIP. Plus some people may want different VIPs to float between different groups of machines.

I still think there's a case for supporting multiple address families, and I could see an argument for an arbitrary list of addresses: it would also cover cases like a smooth migration to a new kube-apiserver VIP, or someone wanting both a ULA and a GUA address.

github-actions[bot] commented 4 days ago

This issue is stale because it has been open 180 days with no activity. Remove stale label or comment or this will be closed in 7 days.