**ader1990** opened 2 months ago
When deploying K8S with a large set of services, especially Rook/Ceph and KubeVirt on the same cluster, containers that heavily use `inotify` start erroring out.

Nowadays, the Linux kernel caps `max_user_watches` at 1048576 and sets the default within the [8192, 1048576] range according to the RAM size. See: https://github.com/torvalds/linux/commit/92890123749bafc317bbfacbe0a62ce08d78efb7
It would be nice to document this behaviour and suggest setting a bigger value for large K8S deployments (with an Ignition/Butane example).
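A Butane snippet along these lines could serve as the documented example. This is only a sketch: the `variant`/`version` pair must match the target OS (shown here with the Fedora CoreOS `fcos` variant), and the file name and values are illustrative assumptions, not tested defaults.

```yaml
variant: fcos
version: 1.5.0
storage:
  files:
    # Drop-in applied by systemd-sysctl at boot; values are illustrative
    # and should be sized for the cluster's workload.
    - path: /etc/sysctl.d/90-inotify.conf
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_instances = 8192
          fs.inotify.max_user_watches = 1048576
```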
```
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches
```
See: https://www.suse.com/support/kb/doc/?id=000020048
@ader1990 how about we document this and ship a default sysctl setting that is good enough for a decently sized cluster?
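A shipped default could be a plain sysctl drop-in. The sketch below generates such a file locally so its contents can be inspected; the file name and values are assumptions sized per the SUSE guidance above, and on a real host the file would live in `/etc/sysctl.d/` and be applied with `sysctl --system`.

```shell
#!/bin/sh
# Sketch of a default sysctl drop-in raising inotify limits for a large
# cluster. Written to the current directory for demonstration; install
# to /etc/sysctl.d/90-inotify.conf on a real host.
cat > 90-inotify.conf <<'EOF'
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
EOF

# Show the generated drop-in.
cat 90-inotify.conf
```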