Closed: willzhang closed this issue 9 months ago.
same here
Kubekey uses kube-vip to deploy HA clusters; the same issue was found there and can be solved by adding the node CIDR to the kube-proxy configuration. Refer to: kubekey-1702. Maybe we should add some instructions on kube-vip's website to remind users of this bug.
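For reference, on a kubeadm-built cluster that setting lives in the KubeProxyConfiguration that kube-proxy reads (the kube-proxy ConfigMap in kube-system, key config.conf). A minimal sketch of the relevant fragment, assuming the nodes and VIP sit on 10.128.5.0/24 (substitute your own node CIDR); kube-proxy has to be restarted to pick the change up:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  # excludeCIDRs tells kube-proxy not to clean up IPVS virtual servers in these ranges,
  # so it leaves the control-plane VIP entry created by kube-vip alone.
  excludeCIDRs:
    - 10.128.5.0/24   # node network carrying the VIP (example value, adjust to your environment)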
me too
OS: Rocky Linux release 9.2, kernel: 5.14.0-284.18.1.el9_2.x86_64, Kubernetes: 1.27.4, containerd: 1.6.20, kube-vip: 0.6.0
Has anyone solved this problem yet?
I'm still fuzzy on the details, but in the issue linked by @lwabish, it looks like the problem was resolved by telling kube-proxy to ignore the subnet the control plane nodes live on. My guess is that something like a race condition is being generated. I added that subnet to the no_proxy config for my kubespray deployment, but it does not seem to have made a difference after running the playbook again.
I had a similar issue, but using kubespray and MetalLB. The LB IP on the control plane was gone, and I got the same error messages as above. Fortunately, kubespray has a way to specify this exclusion: https://github.com/kubernetes-sigs/kubespray/blob/747d8bb4c2d31669b2d7eed2b38bc4da2c689fab/roles/kubernetes/control-plane/defaults/main/kube-proxy.yml#L68
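In case it helps anyone, a minimal sketch of what that looks like in the kubespray inventory, assuming the variable at the linked line is kube_proxy_exclude_cidrs and using an example node CIDR:

# group_vars/k8s_cluster/k8s-cluster.yml (kubespray inventory)
kube_proxy_mode: ipvs
# Keep kube-proxy from removing IPVS entries on the network that carries the control-plane VIP.
kube_proxy_exclude_cidrs:
  - 10.128.5.0/24   # node network CIDR (adjust to your environment)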
Correction: after applying the config changes and rerunning the kubespray playbooks, the error still occurs. I need to double-check whether the config made its way through or not...
time="2024-01-12T19:37:34Z" level=error msg="Error querying backends file does not exist"
time="2024-01-12T19:37:34Z" level=info msg="Created Load-Balancer services on [10.128.5.1:6443]"
time="2024-01-12T19:37:34Z" level=info msg="Added backend for [10.128.5.1:6443] on [10.128.5.14:6443]"
time="2024-01-12T19:37:34Z" level=info msg="Added backend for [10.128.5.1:6443] on [10.128.5.13:6443]"
time="2024-01-12T19:37:39Z" level=info msg="Added backend for [10.128.5.1:6443] on [10.128.5.12:6443]"
time="2024-01-12T19:38:05Z" level=error msg="Error querying backends file does not exist"
time="2024-01-12T19:38:05Z" level=info msg="Created Load-Balancer services on [10.128.5.1:6443]"
time="2024-01-12T19:38:05Z" level=info msg="Added backend for [10.128.5.1:6443] on [10.128.5.14:6443]"
time="2024-01-12T19:38:05Z" level=info msg="Added backend for [10.128.5.1:6443] on [10.128.5.13:6443]"
time="2024-01-12T19:38:10Z" level=info msg="Added backend for [10.128.5.1:6443] on [10.128.5.12:6443]"
time="2024-01-12T19:38:30Z" level=error msg="Error querying backends file does not exist"
time="2024-01-12T19:38:30Z" level=info msg="Created Load-Balancer services on [10.128.5.1:6443]"
time="2024-01-12T19:38:30Z" level=info msg="Added backend for [10.128.5.1:6443] on [10.128.5.12:6443]"
time="2024-01-12T19:38:36Z" level=info msg="Added backend for [10.128.5.1:6443] on [10.128.5.14:6443]"
time="2024-01-12T19:38:36Z" level=info msg="Added backend for [10.128.5.1:6443] on [10.128.5.13:6443]
EDIT2: the kube-proxy ConfigMap was not modified yet. But I doubt it is actually the issue, since there are no logs mentioning kube-proxy deleting this IP (10.128.5.1), and I have watch -n 0 ip a show bond0 running on all three nodes, and one of them has the right IP all the time.
This worked for me, i.e. adding the node CIDR to the kube-proxy IPVS exclusion. My kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.2.160"   # control plane node local IP
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.k8s.io
kubernetesVersion: 1.28.2
kubeProxyArgs: ["--ipvs-exclude-cidrs=192.168.2.0/24"]   ###### CIDR of the node network ######
controlPlaneEndpoint: "192.168.2.159:6443"   # load balancer VIP
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: "10.32.0.0/12"
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
    - "master01"
    - "master02"
    - "192.168.2.160"
    - "192.168.2.161"
    - "192.168.2.159"
    - "127.0.0.1"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    serverCertSANs:
      - "master01"
      - "master02"
      - "192.168.2.160"
      - "192.168.2.161"
      - "192.168.2.159"
      - "127.0.0.1"
    peerCertSANs:
      - "master01"
      - "master02"
      - "192.168.2.160"
      - "192.168.2.161"
      - "192.168.2.159"
      - "127.0.0.1"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: "systemd"
#cgroupDriver: cgroupfs
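In case a given kubeadm version does not accept the kubeProxyArgs field above, the same exclusion can also be expressed by appending a KubeProxyConfiguration document to the same kubeadm-config.yaml; a minimal sketch, reusing the node CIDR from the config above:

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  # Keep kube-proxy from flushing the IPVS entry kube-vip creates for the control-plane VIP.
  excludeCIDRs:
    - 192.168.2.0/24   # node network CIDR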
Yeah, we should add this to the docs.
This is now part of the documentation.
Describe the bug: control plane load balancing does not work.
To Reproduce (steps to reproduce the behavior):
init cluster with
Expected behavior: control plane load balancing with IPVS and the VIP.
Environment (please complete the following information):
Kube-vip.yaml
Additional context:
cannot see IPVS load balancing with the VIP 192.168.72.200
kube-vip pod logs