**Open** · lqdflying opened 7 months ago
@lqdflying I ran into this problem too. It worked fine on Ubuntu 18, but after upgrading to 22.04 it broke. Is there a fix?
Is your symptom the same as mine? After installation, ipvs does not work and coredns keeps hitting i/o timeout. Was Ubuntu 18 fine for you?
It may be related to the rp_filter parameter in sysctl; check whether it is equal to 2.
I have a single node, and pods cannot reach each other. Ubuntu 18 is fine; on Ubuntu 22 even a hard reboot does not help.
@zheng1 All rp_filter values equal 2:
```shell
# sysctl -a | grep rp_filter | grep -v cali | grep -v arp
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.docker0.rp_filter = 2
net.ipv4.conf.ens3f0.rp_filter = 2
net.ipv4.conf.ens3f1.rp_filter = 2
net.ipv4.conf.ens3f2.rp_filter = 2
net.ipv4.conf.ens3f3.rp_filter = 2
net.ipv4.conf.kube-ipvs0.rp_filter = 2
net.ipv4.conf.lo.rp_filter = 2
net.ipv4.conf.nodelocaldns.rp_filter = 2
net.ipv4.conf.tunl0.rp_filter = 2
net.ipv4.conf.veth39d3770.rp_filter = 2
net.ipv4.conf.vethcfe9ce6.rp_filter = 2
net.ipv4.conf.vetha30d89b.rp_filter = 2
```
All interfaces with the `cali*` prefix also equal 2.
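For reference, per-interface values like the ones above can be gathered with a small read-only loop; a minimal sketch that reads `/proc` directly, so no root is needed:

```shell
#!/bin/sh
# List the rp_filter setting for every network interface by reading /proc.
# 0 = off, 1 = strict, 2 = loose; Ubuntu 22.04 defaults many of these to 2.
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
    iface=$(basename "$(dirname "$f")")
    printf '%s = %s\n' "$iface" "$(cat "$f")"
done
```

This is equivalent to the `sysctl -a | grep rp_filter` pipeline but avoids the noise filtering.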
Woo, finally a member noticed this case. So please let me explain a little more.

Recently, after dozens of K8s-with-KK re-installations and numerous attempts, I found that the issue ultimately comes down to one point:

- All nodes must explicitly add `echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf`
- All nodes must reboot. Even `sysctl -p` doesn't work.

After that, the K8s-with-KK installation succeeds in one go, with no additional reboot required after the KK scripts complete the run. I use the default `docker` as the CRI, and I also noticed that kk updates `sysctl.conf` during the run and adds `net.ipv4.ip_forward=1` to it. Frankly speaking, I have no idea why every node needs the explicit `echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf` and a reboot, but on my side it does work.

Any advice or comments from your side? Many thanks.
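The state this workaround targets can be verified without root; a minimal sketch (assuming Linux's `/proc` interface):

```shell
#!/bin/sh
# Check whether IP forwarding is enabled; kube-proxy needs it to forward
# Service traffic. Reads /proc directly, so it works without root.
val=$(cat /proc/sys/net/ipv4/ip_forward)
if [ "$val" = "1" ]; then
    echo "ip_forward=1 (ok)"
else
    echo "ip_forward=$val - add net.ipv4.ip_forward=1 to /etc/sysctl.conf and reboot"
fi
```

Running this on each node before and after the reboot would show whether the setting actually took effect.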
Thanks @lqdflying @zheng1, you saved me.

However, `echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf` plus a reboot did not work for me. Instead, I added `net.ipv4.conf.all.rp_filter=1` and `net.ipv4.conf.default.rp_filter=1` to the /etc/sysctl.conf file, rebooted, and then my k8s worked.
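For anyone who wants to try the same two keys at runtime before persisting them, here is a small dry-run helper; it only prints the `sysctl -w` commands (pipe its output to a root shell to actually apply them), and the keys and values are exactly the ones named above:

```shell
#!/bin/sh
# Print the sysctl commands for the strict-rp_filter fix from this thread.
# Dry-run by design: nothing is changed unless you pipe the output to sh.
for key in net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter; do
    echo "sysctl -w $key=1"
done
```

Applying the values with `sysctl -w` would also show whether the fix works at all before committing to an edit of /etc/sysctl.conf and a reboot.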
What version of KubeKey has the issue?
v3.0.13
What is your OS environment?
Debian 11, Ubuntu 22.04
KubeKey config file
A clear and concise description of what happened.
After kk finishes installing K8s, the component pods all start without errors, but coredns keeps emitting a large volume of error logs.
After creating a test pod and exposing it via NodePort, I found that clusterIP/podIP/nodeport are all reachable only on the node where the pod actually runs; the other nodes cannot forward traffic normally. I suspect an ipvs forwarding problem. kube-proxy has no error logs.
Relevant log output
The ipvsadm configuration on the master1 node is as follows (but master1 cannot reach the test pod via clusterIP/NodePort):
My test deployment manifest is as follows: