Closed · GeorgeGuo2018 closed this issue 4 years ago
Setting --ipvs-exclude-cidrs=0.0.0.0/0 can work around it. But I was still wondering why --cleanup-ipvs or --cleanup did not take effect?
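For anyone else hitting this, a minimal sketch of where that workaround can go, assuming kube-proxy is configured through the kubeadm-generated kube-proxy ConfigMap (field names from the kubeproxy.config.k8s.io/v1alpha1 API; the rest of the config is omitted, and none of this is taken from the poster's cluster):

```yaml
# Excerpt of the KubeProxyConfiguration stored in the kube-proxy ConfigMap
# (kube-system namespace on kubeadm clusters). After editing it, the
# kube-proxy pods have to be restarted to pick up the change.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # CIDRs the IPVS proxier must not touch when cleaning up IPVS rules;
  # 0.0.0.0/0 excludes everything, so externally managed virtual servers
  # (such as the ones keepalived creates) are left alone.
  excludeCIDRs:
    - "0.0.0.0/0"
```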
Update: the IPVS rules maintained by keepalived were deleted while kube-proxy was running, so it has nothing to do with the cleanup-ipvs option. But why does kube-proxy delete keepalived's rules? Is there any option I overlooked?
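A quick way to see this happening, assuming ipvsadm is installed on the node and using the virtual server address from this issue (10.6.115.166:41808):

```sh
# Watch the keepalived-managed virtual server entry; with kube-proxy 1.18 in
# ipvs mode it appears when keepalived adds it and vanishes again within a
# couple of seconds, i.e. on the next kube-proxy sync.
watch -n 1 "ipvsadm -Ln -t 10.6.115.166:41808"
```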
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Hi there. I installed a k8s cluster with kube-proxy mode=ipvs and deployed a keepalived pod in the cluster. I configured keepalived with the following virtual server and real server:

```
virtual_server 10.6.115.166 41808 {
    delay_loop 5
    lvs_sched wlc
    lvs_method NAT
    persistence_timeout 1800
    protocol TCP
```
However, I can only see the IPVS rule for the virtual server 10.6.115.166:41808 for at most 2 seconds before it is gone. Obviously, this IPVS rule, which keepalived maintains, gets flushed by kube-proxy.
Once I change the kube-proxy image version from 1.18.0 to 1.15.3, everything is fine again and the IPVS rule for 10.6.115.166:41808 is never lost.
My cluster is k8s 1.18, with kube-proxy version 1.18.0 and keepalived version v2.0.20. The configuration of kube-proxy is as follows; it seems that the cleanup and cleanup-ipvs options, both set to false, did not take effect.
Any reply would be appreciated.
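The kube-proxy manifest itself is not reproduced above, so as a purely illustrative sketch, this is roughly how those flags appear in a kubeadm-style kube-proxy DaemonSet; the image tag, paths, and values are assumptions, not the reported cluster's configuration:

```yaml
# Illustrative kube-proxy container spec, NOT the poster's manifest.
# Note: --cleanup-ipvs only matters together with --cleanup (it decides whether
# a one-off cleanup run also flushes IPVS rules); it does not stop the normal
# IPVS proxier sync loop from removing virtual servers it does not recognize.
containers:
  - name: kube-proxy
    image: k8s.gcr.io/kube-proxy:v1.18.0
    command:
      - /usr/local/bin/kube-proxy
      - --config=/var/lib/kube-proxy/config.conf
      - --cleanup=false
      - --cleanup-ipvs=false
```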