Xinayder opened 5 months ago
Not sure if this is the right answer, but if you want to use the CCM with a network, meaning a Hetzner Cloud private network, the Cloud Docs say clearly that IPv6 is not supported on it.
This is not a bug in the CCM: the CCM does not implement the Hetzner Cloud Network; it only consumes/uses it.
> if you want to use the CCM with a network, meaning a Hetzner Cloud private network, the Cloud Docs say clearly that IPv6 is not supported on it.
I agree, the Hetzner docs mention this explicitly.
> This is not a bug in the CCM: the CCM does not implement the Hetzner Cloud Network; it only consumes/uses it.
Now I disagree with this. It is indeed a bug in the CCM: if I define my cluster as dual-stack, the CCM will endlessly attempt to create an IPv6 route on the Hetzner network via its API and constantly fail, because the API fields have a strict IPv4 validation rule that rejects v6 addresses. This prevents the node from becoming ready, resulting in pods having connection issues.
The way to solve this would be to create only the IPv4 routes when Hetzner networking is enabled, whether the node is dual-stack or not, i.e. filter out IPv6 destinations before calling the API (sketched below).
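A minimal sketch of that filter in Go (illustrative only, not actual HCCM code):

```go
package main

import (
	"fmt"
	"net"
)

// isIPv4CIDR reports whether a CIDR has an IPv4 destination. Hetzner Cloud
// Networks currently route IPv4 only, so anything else cannot become a route.
func isIPv4CIDR(cidr string) bool {
	ip, _, err := net.ParseCIDR(cidr)
	return err == nil && ip.To4() != nil
}

func main() {
	// Pod CIDRs a dual-stack node might announce (example values).
	podCIDRs := []string{"10.42.0.0/24", "2001:cafe:42::/64"}

	for _, cidr := range podCIDRs {
		if !isIPv4CIDR(cidr) {
			// Skip IPv6 destinations instead of sending the API a request
			// that can never pass its IPv4-only validation.
			fmt.Printf("skipping %s: Hetzner Cloud Networks are IPv4-only\n", cidr)
			continue
		}
		fmt.Printf("would create network route for %s\n", cidr)
	}
}
```

Whether to skip silently or fail loudly is debatable; see the maintainer's point about packet loss below.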
I agree that this configuration error should be surfaced in a nicer way, and that we should document the limits of HCCM better.
Sending unnecessary requests to the API is also a big problem of HCCM in general.
There is no way to get the node into a ready/healthy state with this configuration.
If you assign Pod IPs in IPv6 (through `cluster-cidr`), the pods in the cluster and kube-proxy will try to send traffic to that address.
If you want to use the Hetzner Cloud Network routes (through the route-controller) instead of the CNI to route this inter-pod traffic, we are responsible for making sure that every packet sent to an address in the `cluster-cidr` is forwarded to the correct node.
As our private networks (currently) only support IPv4, we cannot do that for any IPv6 packets.
If HCCM silently drops any IPv6 routes and marks the node as ready/healthy, this will cause packet loss and break network connections between (a subset of) your pods.
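For illustration, the route-controller's per-node work corresponds to something like the following hcloud CLI calls (network name and addresses are placeholders):

```sh
# IPv4: one route per node, pointing the node's pod CIDR at its private IP
hcloud network add-route my-network --destination 10.42.1.0/24 --gateway 10.0.0.2

# The IPv6 equivalent is rejected by the API, which is exactly the
# invalid_input error loop reported in this issue
hcloud network add-route my-network --destination 2001:cafe:42:1::/64 --gateway 10.0.0.2
```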
Do you intend to use the routes provisioned by HCCM instead of relying on your CNI? If not, you can just disable the routes-controller.
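HCCM builds on the generic Kubernetes cloud-provider framework, so disabling the route controller should amount to something like the following flag (treat the exact invocation as an assumption and check the HCCM manifests):

```sh
hcloud-cloud-controller-manager --cloud-provider=hcloud --controllers='*,-route'
```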
I have a local IP assigned to each of my VPSes in Hetzner and I wanted to use them for inter-server communication. My idea is to have 2 VPSes running with a local IP each, and to set up a 2-node k3s cluster with them.
Would it be possible to use these internal routes with the CNI? The whole reason I want a dual-stack setup is to have my servers available via IPv6. I'm not sure if it's possible to have a k3s server accessible via IPv6 but using IPv4 for its local connections.
I do not know if that will work. You will have to try this yourself, I am sorry.
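For anyone who wants to experiment, the setup asked about above would roughly mean giving k3s the private IPv4 as the node IP and the public IPv6 as the external IP, along these lines (untested sketch, addresses are placeholders):

```sh
k3s server \
  --node-ip=10.0.0.2 \
  --node-external-ip=2001:db8::2
```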
This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.
TL;DR
Setting up a dual-stack k8s cluster with HCCM and Hetzner's inter-server networking will result in the CCM attempting to create IPv6 routes for the internal Hetzner network, which only supports IPv4.
Expected behavior
IPv6 routes should be created normally.
Observed behavior
The cloud controller keeps trying to create an IPv6 route on a v4-only network, failing with the error:

```
Could not create route <route-id> 2001:cafe:42::/64 for node <node_name>: hcloud/CreateRoute invalid input in field 'destination' (invalid_input)
```
Minimal working example
Set up k3s with:
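The exact flags did not survive formatting; judging from the `2001:cafe:42::/64` prefix in the error above, the configuration was presumably along the lines of k3s's documented dual-stack example (a reconstruction, not the reporter's verbatim config):

```sh
k3s server \
  --cluster-cidr=10.42.0.0/16,2001:cafe:42::/56 \
  --service-cidr=10.43.0.0/16,2001:cafe:43::/112
```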
Set up a Hetzner network with address range `10.0.0.0/24`.
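Equivalently via the hcloud CLI (names and zone are placeholders; a cloud subnet is needed before servers can attach):

```sh
hcloud network create --name my-network --ip-range 10.0.0.0/24
hcloud network add-subnet my-network --type cloud --network-zone eu-central --ip-range 10.0.0.0/24
```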
Log output
Additional information
No response