Jasstkn opened this issue 3 years ago
I'm having a similar problem. After setting up the cluster, I couldn't get logs from the worker node. It only started working after I created a firewall rule that allows traffic from the controller node's external IP address to the worker node.
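For anyone hitting the same thing, a rule like this on the worker is one way to do it with `ufw` (the controller IP below is a placeholder; port 10250 is the kubelet API, which is what metrics-server and `kubectl logs` need to reach):

```shell
# Allow the controller node's external IP (placeholder) to reach the
# kubelet API (port 10250/tcp) on this worker.
sudo ufw allow proto tcp from 203.0.113.10 to any port 10250
```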
All nodes, including the Rancher server, are in the same private network:
Interestingly, kubectl reports the node's external IP as the internal IP, while the external IP is shown as `<none>`:
```
NAME      STATUS   ROLES                      AGE     VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
master1   Ready    controlplane,etcd,worker   5d15h   v1.20.11   65.108.xx.xx    <none>        Ubuntu 20.04.3 LTS   5.4.0-84-generic   docker://20.10.7
worker1   Ready    worker                     5d15h   v1.20.11   65.108.xx.xxx   <none>        Ubuntu 20.04.3 LTS   5.4.0-84-generic   cri-o://1.20.5
```
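To see exactly which address types the API server recorded for a node (the node name below is taken from the output above), something like this works:

```shell
# List the address types (InternalIP, ExternalIP, Hostname, ...) and
# addresses recorded in the node's status.
kubectl get node worker1 \
  -o jsonpath='{range .status.addresses[*]}{.type}{"\t"}{.address}{"\n"}{end}'
```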
Rancher seems to detect the nodes' IP addresses correctly, though.
The issue here seems to be the metrics server as well:
```
E0925 13:18:00.108492 1 server.go:132] unable to fully scrape metrics: unable to fully scrape metrics from node worker1: unable to fetch metrics from node worker1: Get "https://65.108.xx.xxx:10250/stats/summary?only_cpu_and_memory=true": dial tcp 65.108.xx.xxx:10250: i/o timeout
```
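The timeout above is metrics-server trying to reach the kubelet on the first address type it prefers. metrics-server has a real `--kubelet-preferred-address-types` flag that controls that order; if the private IPs ever get recorded as `InternalIP`, forcing that order can help. A hedged sketch of adding the flag (the deployment name, namespace, and container index are assumptions about a default install):

```shell
# Append the address-type preference to the metrics-server args.
# Assumes metrics-server runs as a Deployment named "metrics-server"
# in kube-system with a single container.
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"}]'
```

Note this only helps once the node actually reports an `InternalIP`; it does not fix the address assignment itself.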
Hi!
I'm trying to investigate the problem with metrics-server. In the logs I see that it's complaining about a missing internal IP:
It's working fine only for the node which isn't in the Hetzner Cloud (a dedicated root server). From `kubectl get nodes -o wide`, I can see that this node automatically got an internal IP, while the others only got external IPs.
Do you have any ideas about what is going on and how to fix it? I attached the dedicated server using the following command:
Those nodes are attached to the same private network.
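In case it helps others: Rancher's custom-cluster registration command accepts `--address` and `--internal-address` flags, which control which IPs the node registers with. A hedged example (the image tag, server URL, token, and both IPs below are placeholders, not values from this thread):

```shell
# Hypothetical registration command for a dedicated server. The key part is
# --internal-address, which sets the IP recorded as the node's InternalIP
# (here: the server's IP in the shared private network).
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.5.9 \
  --server https://rancher.example.com --token <registration-token> \
  --worker \
  --address 203.0.113.20 \
  --internal-address 10.0.0.5
```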