Cloud-Mak opened 3 weeks ago
Update - there is already a merged PR for ignoring /sys/module/nf_conntrack/parameters/hashsize, so I'm not sure how this is still an issue.
One more PR - https://github.com/kubernetes/kubernetes/pull/19303
I tried patching the kube-proxy DaemonSet in kube-system, just adding --conntrack-max-per-core=0 to the kube-proxy container. After this the kube-proxy pod went into an error state, and I can't see any logs from that pod now. Strange.
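On kubeadm-based clusters, kube-proxy usually reads its settings from the kube-proxy ConfigMap in kube-system rather than from container flags, so a flag patch can conflict with or be overridden by that config. A hedged sketch of the equivalent change via the ConfigMap instead (field names from the KubeProxyConfiguration API; values illustrative):

```yaml
# Sketch: edit with `kubectl -n kube-system edit configmap kube-proxy`,
# then restart the pods, e.g. `kubectl -n kube-system rollout restart daemonset kube-proxy`.
# Inside the embedded config.conf (a KubeProxyConfiguration), set:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
conntrack:
  maxPerCore: 0   # 0 = do not set nf_conntrack_max based on CPU count
  min: 0          # 0 = do not enforce a minimum (assumption: goal is to skip the sysctl write entirely)
```

This is a sketch under the assumption that the cluster was set up with kubeadm; other installers configure kube-proxy differently.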
Additional info - Linux does not support writing to /sys/module/nf_conntrack/parameters/hashsize when the writing process is not in the initial network namespace (https://github.com/torvalds/linux/blob/v4.10/net/netfilter/nf_conntrack_core.c#L1795-L1796).
Usually that's fine, but in some configurations (e.g. with https://github.com/kinvolk/kubeadm-nspawn, or in a current LXD container) kube-proxy runs in a different network namespace.
Therefore, kube-proxy should check whether writing hashsize is actually necessary and skip the write if not.
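The "skip if not necessary" idea above amounts to reading the current value first and only writing when it differs. A minimal Go sketch of that check (this is an illustration of the proposed logic, not kube-proxy's actual code; the target value 131072 is made up):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// needsWrite reports whether hashsize must be rewritten: only when the
// currently configured value differs from the desired one.
func needsWrite(current, desired int) bool {
	return current != desired
}

// readIntFile parses a single integer from a sysfs-style file.
func readIntFile(path string) (int, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}

func main() {
	const path = "/sys/module/nf_conntrack/parameters/hashsize"
	desired := 131072 // illustrative target, not a recommendation

	current, err := readIntFile(path)
	if err != nil {
		fmt.Println("cannot read hashsize, skipping:", err)
		return
	}
	if !needsWrite(current, desired) {
		fmt.Println("hashsize already correct, skipping write")
		return
	}
	// The write fails outside the initial network namespace (see the kernel
	// link above), so treat it as best-effort rather than fatal.
	if err := os.WriteFile(path, []byte(strconv.Itoa(desired)), 0o644); err != nil {
		fmt.Println("write failed (likely not in initial netns):", err)
	}
}
```

With a check like this, a container whose hashsize already matches never attempts the forbidden write, and the netns restriction stops being fatal.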
References - https://github.com/kinvolk/kube-spawn/issues/14 https://github.com/kubernetes/kubernetes/issues/90083
Hi Venkat @justmeandopensource
https://github.com/justmeandopensource/kubernetes/tree/master/lxd-provisioning
Looks like there is an issue with this method now. The kube-proxy pod remains in a crash loop and eventually goes into an error state.
I already set the value with sysctl -w net.netfilter.nf_conntrack_max=786432. It's not helping at all.
More info - Host: Ubuntu 20.04.6 LTS, LXD: stable 5.21. I am setting up the first master node, which is running inside an Ubuntu 22 LXD container.
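For reference, Kubernetes-in-LXD guides such as the lxd-provisioning repo linked above typically rely on a container profile that loads the required kernel modules on the host and relaxes confinement; even so, conntrack sysctls written inside the container may not behave as expected because the kernel (and the initial network namespace) is shared with the host. A sketch of such a profile (config keys from LXD's instance-configuration documentation; the module list is illustrative and may differ from the repo's actual profile):

```yaml
config:
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  security.privileged: "true"
  security.nesting: "true"
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.cgroup.devices.allow=a
    lxc.mount.auto=proc:rw sys:rw
```

If the profile in use already looks like this, the crash loop is more likely the hashsize/netns restriction discussed earlier than a missing module.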
The node is showing Ready status (I have already deployed the Flannel network).