siderolabs / terraform-provider-talos

Mozilla Public License 2.0

talos_cluster_health fails, while talosctl health is fine #153

Open stelb opened 8 months ago

stelb commented 8 months ago

Hi,

First problem:

│ waiting for etcd members to be control plane nodes: etcd member ips ["10.1.0.6" "XX.75.176.68" "10.1.0.2"] are not subset of control plane node ips ["10.1.0.2" "10.1.0.6" "10.1.0.7"]

I fixed this by setting advertisedSubnets to the internal CIDR.
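For reference, a minimal machine-config sketch of that fix (the 10.1.0.0/24 subnet is an assumption based on the node IPs above; adjust to your internal network):

```yaml
# Sketch only: advertise etcd peer addresses from the internal subnet,
# so etcd member IPs match the control plane node IPs.
# 10.1.0.0/24 is assumed from the IPs mentioned above.
cluster:
  etcd:
    advertisedSubnets:
      - 10.1.0.0/24
```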

Now etcd is OK, but there is an unexpected k8s node:

│ waiting for all k8s nodes to report: can't find expected node with IPs ["10.1.0.3"]
│ waiting for all k8s nodes to report: unexpected nodes with IPs ["XX.75.176.68"]

(I reduced the number of nodes.)
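The error suggests the health check compares the node lists passed to the data source against the addresses the nodes actually report. A hedged sketch of how those lists are supplied (the IPs are the internal ones from above; the `talos_machine_secrets.this` reference is an assumed setup, not from this thread):

```hcl
# Sketch: talos_cluster_health compares these lists against the
# addresses the cluster members report, so they must match exactly.
data "talos_cluster_health" "this" {
  client_configuration = talos_machine_secrets.this.client_configuration
  endpoints            = ["10.1.0.2"]
  control_plane_nodes  = ["10.1.0.2", "10.1.0.6", "10.1.0.7"]
  worker_nodes         = ["10.1.0.3"]
}
```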

But when I check this with talosctl:

talosctl -n 10.1.0.3 -e xx.13.164.153 health

discovered nodes: ["10.1.0.3" "xx.75.176.68"]
waiting for etcd to be healthy: OK
waiting for etcd members to be consistent across nodes: OK
waiting for etcd members to be control plane nodes: OK
waiting for apid to be ready: OK
waiting for all nodes memory sizes: OK
waiting for all nodes disk sizes: OK
waiting for kubelet to be healthy: OK
waiting for all nodes to finish boot sequence: OK
waiting for all k8s nodes to report: OK
waiting for all k8s nodes to report ready: OK
waiting for all control plane static pods to be running: OK
waiting for all control plane components to be ready: OK
waiting for kube-proxy to report ready: SKIP
waiting for coredns to report ready: OK
waiting for all k8s nodes to report schedulable: OK

or with public cp ip:

talosctl -n xx.13.164.153 -e xx.13.164.153 health

The output is identical: the same nodes are discovered and every check passes (kube-proxy again SKIP).

So what is the problem?

JonasKop commented 2 months ago

I have the same issue when using a VIP; talosctl health works fine.

machine:
  network:
    interfaces:
      - interface: eth0
        dhcp: true
        vip:
          ip: 10.0.2.160
spastorclovr commented 1 month ago

Almost the same issue here. kubelet.nodeIP.validSubnets is set to the internal IPs, and advertisedSubnets is set to the internal IPs as well, but the Terraform data source still does not consider the cluster healthy, while talosctl health does.

The error is

 unexpected nodes with IP 

followed by the list of the private IPs of the worker nodes.
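The two settings mentioned combine like this in the machine config (a sketch only; 10.1.0.0/24 stands in for the actual internal CIDR):

```yaml
# Sketch: pin both the kubelet's reported node IP and etcd's advertised
# peer addresses to the internal subnet. Subnet value is an assumption.
machine:
  kubelet:
    nodeIP:
      validSubnets:
        - 10.1.0.0/24
cluster:
  etcd:
    advertisedSubnets:
      - 10.1.0.0/24
```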

samos667 commented 1 week ago

Same error here: the node reported unhealthy by talos_cluster_health is a remote node joined via KubeSpan, which sits in a different network than the control planes.

(screenshot of the talos_cluster_health error)

Then talosctl with the same endpoint, but targeting only one control plane node, reports all nodes OK as it should:

(screenshot of the talosctl health output)