Closed carpepraedam closed 1 month ago
```yaml
etcd-arg:
  - "initial-cluster=etcd1=https://kube-svc-m1.domain.net:2380,etcd2=https://kube-svc-m2.domain.net:2380,etcd3=https://kube-svc-m3.domain.net:2380"
  - "initial-advertise-peer-urls=https://kube-svc-m1.domain.net:2380"
  - "listen-peer-urls=https://0.0.0.0:2380"
  - "listen-client-urls=https://0.0.0.0:2379"
  - "advertise-client-urls=https://kube-svc-m1.domain.net:2379"
```
Don't do that. Etcd cluster membership and advertised addresses are managed by RKE2 and you should not attempt to override these CLI args. If you want to manage your own etcd cluster, you should do so using standalone etcd, installed as a systemd service.
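A minimal sketch of what "standalone etcd as a systemd service" could look like, reusing the endpoints from the config snippet above. The unit path, binary location, data dir, and the omission of TLS cert flags are all illustrative assumptions, not RKE2 or etcd defaults:

```ini
# /etc/systemd/system/etcd.service -- illustrative sketch only;
# add --cert-file/--key-file/--peer-* TLS flags for a real https setup
[Unit]
Description=etcd key-value store
After=network-online.target

[Service]
ExecStart=/usr/local/bin/etcd \
  --name etcd1 \
  --data-dir /var/lib/etcd \
  --initial-cluster etcd1=https://kube-svc-m1.domain.net:2380,etcd2=https://kube-svc-m2.domain.net:2380,etcd3=https://kube-svc-m3.domain.net:2380 \
  --initial-advertise-peer-urls https://kube-svc-m1.domain.net:2380 \
  --listen-peer-urls https://0.0.0.0:2380 \
  --listen-client-urls https://0.0.0.0:2379 \
  --advertise-client-urls https://kube-svc-m1.domain.net:2379
Restart=always

[Install]
WantedBy=multi-user.target
```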
> I would expect RKE2 configuration to be able to support ETCD over FQDN as this is a fully supported feature of ETCD itself.
We intentionally use node internal IP addresses instead of DNS names for cluster endpoints to ensure that the cluster works reliably without requiring users to have functional DNS for their node addresses. We are not currently planning on supporting use of hostnames or external IPs for managed cluster member endpoints.
Environmental Info: RKE2 Version: v1.30.4+rke2r1
Node(s) CPU architecture, OS, and Version: Linux kube-svc-m1 6.8.0-45-generic #45-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug 30 12:02:04 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration: 3 masters, 3 workers. Cluster exists on a network where static IPs are not available. All nodes get IPs via DHCP. Nodes will receive new IPs regularly due to patching and various reboots.
Describe the bug: ETCD in RKE2 does not work with fully qualified domain names. This causes the control plane to break when control plane members have different IPs than when the cluster was originally configured. A cluster reset procedure fixes this, however I would expect RKE2 configuration to be able to support ETCD over FQDN as this is a fully supported feature of ETCD itself.
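For context, the cluster reset procedure referred to here is roughly the following (based on RKE2's documented `--cluster-reset` flag; run on one server node only, and treat the exact sequence as a sketch):

```shell
# Run on ONE server node only; resets etcd to a single-member cluster
systemctl stop rke2-server
rke2 server --cluster-reset
systemctl start rke2-server
# Remaining server nodes must have their etcd data wiped before rejoining,
# e.g. by removing /var/lib/rancher/rke2/server/db on those nodes
```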
Steps To Reproduce:
Create the rke2 config file:
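The relevant part of the config file is the etcd-arg block quoted earlier in this thread; a sketch of the full file might look like the following (the `token` and `tls-san` entries are illustrative placeholders, not taken from the report):

```yaml
# /etc/rancher/rke2/config.yaml -- sketch; token/tls-san values are placeholders
token: <shared-cluster-token>
tls-san:
  - kube-svc-m1.domain.net
etcd-arg:
  - "initial-cluster=etcd1=https://kube-svc-m1.domain.net:2380,etcd2=https://kube-svc-m2.domain.net:2380,etcd3=https://kube-svc-m3.domain.net:2380"
  - "initial-advertise-peer-urls=https://kube-svc-m1.domain.net:2380"
  - "listen-peer-urls=https://0.0.0.0:2380"
  - "listen-client-urls=https://0.0.0.0:2379"
  - "advertise-client-urls=https://kube-svc-m1.domain.net:2379"
```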
Install and enable rke2-server
Server enters a state where it is waiting for etcd. Check the etcd logs.
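On an RKE2 node, etcd runs as a static pod, so its logs can be inspected roughly like this (paths assume a default RKE2 install):

```shell
# etcd static pod logs land under /var/log/pods on the node
ls /var/log/pods/kube-system_etcd-*/

# or use RKE2's bundled crictl
export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml
/var/lib/rancher/rke2/bin/crictl ps --name etcd
/var/lib/rancher/rke2/bin/crictl logs <etcd-container-id>
```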
Noticing that there is an etcd member "kube-svc-m1-15686fed" that is not one we configured, we check the etcd configuration.
Root cause is that the generated pod manifest for etcd (etcd.yaml) does not respect the `initial-advertise-peer-urls` value from etcd-arg. This causes the etcd pod to fail, because kube-svc-m1-15686fed is not a valid member defined in the etcd-arg parameters. Manually editing the manifest does not work either, as the file is recreated every time rke2 restarts.
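The generated manifest in question can be inspected directly on the node; under a default install it lives in RKE2's static pod manifest directory:

```shell
# RKE2 regenerates this file on every start, so manual edits are overwritten
cat /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml

# compare the peer URL arg in the manifest against what config.yaml requested
grep initial-advertise-peer-urls /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
```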
Expected behavior: I expect to be able to set etcd-arg so that etcd members can talk over FQDN, which avoids the control plane failures when master nodes change IP addresses. While static IPs are nice, there is no reason RKE2 cannot support this feature, which ETCD already does.
Actual behavior: RKE2 generates etcd member names and IP addresses that do not match what is specified in the /etc/rancher/rke2/config.yaml file.
Additional context / logs: