rask opened 4 years ago
Oh wow, nice find. Could you send a PR documenting the steps to reduce the CPU count? (Please also --signoff your git commits.)
FWIW, have you also tried the other suggested workarounds, i.e. passing --masquerade-all --conntrack-max=0 --conntrack-max-per-core=0 to kube-proxy? If that does not work, have you tried this:
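If it helps, the flags would be passed along these lines; the exact wiring (DaemonSet args, systemd unit, etc.) depends on how kube-proxy is deployed in this setup, so treat this as a sketch:

```shell
# Sketch: pass the suggested workaround flags to kube-proxy.
# With --conntrack-max=0 and --conntrack-max-per-core=0, kube-proxy skips
# setting nf_conntrack_max, so it never needs to touch the hashsize file.
kube-proxy \
  --masquerade-all \
  --conntrack-max=0 \
  --conntrack-max-per-core=0
```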
On the host you should be able to write to hashsize directly (see http://blog.michali.net/2017/08/09/ipv6-support-for-docker-in-docker/):

echo "262144" > /sys/module/nf_conntrack/parameters/hashsize
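For reference, the value kube-proxy will want can be estimated up front; this sketch assumes the default --conntrack-max-per-core of 32768 and the kernel's usual max = hashsize * 4 ratio:

```shell
# Estimate the conntrack max kube-proxy will compute (per-core default 32768
# times the CPU count) and the hashsize it would try to write (max / 4).
cores=$(nproc)
per_core=32768
max=$((per_core * cores))
hashsize=$((max / 4))
echo "conntrack max: $max, hashsize: $hashsize"
# e.g. with 12 cores: max = 393216, hashsize = 98304
```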
To make sure this is not forgotten, you should be able to put it into an lxc.hook.pre-start hook (see https://stgraber.org/2013/12/23/lxc-1-0-some-more-advanced-container-usage/).
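A minimal hook script could look like this (the HASHSIZE_FILE override and the script path below are illustrative, not from the guide):

```shell
#!/bin/sh
# Illustrative lxc.hook.pre-start script: it runs on the host before the
# container starts, where /sys is still writable.
# HASHSIZE_FILE is overridable here only to make the script easy to test;
# the real target is /sys/module/nf_conntrack/parameters/hashsize.
HASHSIZE_FILE="${HASHSIZE_FILE:-/sys/module/nf_conntrack/parameters/hashsize}"
if [ -w "$HASHSIZE_FILE" ]; then
    echo "262144" > "$HASHSIZE_FILE"
fi
```

It would then be referenced from the container config with something like lxc.hook.pre-start = /usr/local/bin/conntrack-hashsize.sh (path is hypothetical).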
Will try those workarounds, thanks! I'll try and get a PR for you soon.
See https://github.com/kubernetes/kubernetes/issues/58610
When the CPU count is large (e.g. 12, the number of cores on my host), kube-proxy may need to increase the conntrack hashsize when it starts during Kubernetes boot. The problem in LXC setups seems to be that the /sys/.../conntrack/hashsize file cannot be written from inside the container, so startup fails whenever the value needs to be raised.
My fix was to limit the container's CPU count to 4 cores, after which kube-proxy no longer needed to change the hashsize value.
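For anyone else hitting this, limiting the cores with LXD would look something like this (the container name is just an example):

```shell
# Cap the container at 4 CPUs so kube-proxy's computed conntrack max stays
# below the point where it would try to grow the hashsize.
# "k8s-node" is a placeholder container name.
lxc config set k8s-node limits.cpu 4
```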
Maybe add a note about this into the guide?