Describe the solution you'd like
Allow users to specify the maximum number of volumes per node.
Describe alternatives you've considered
Spread constraints are the only other mechanism for influencing how volumes are scheduled, but they work on topology features, such as zones, and cannot guarantee a maximum number of volumes per node.
Kubernetes will respect this limit as long as the CSI driver advertises it. To support volume limits in a CSI driver, the plugin must fill in max_volumes_per_node in NodeGetInfoResponse.
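For illustration, here is a minimal sketch of a node plugin advertising the limit, assuming the standard Go CSI bindings (github.com/container-storage-interface/spec/lib/go/csi); the nodeServer type and its fields are hypothetical stand-ins for a real driver's configuration:

```go
// Minimal sketch, not a full driver: only the NodeGetInfo RPC is shown,
// and the other NodeServer methods are omitted.
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct {
	nodeID     string
	maxVolumes int64 // hypothetical config value; 0 = no limit advertised
}

// NodeGetInfo advertises the node's volume limit. When maxVolumes is 0,
// no CSI-specific cap is reported, matching current behaviour.
func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId:            ns.nodeID,
		MaxVolumesPerNode: ns.maxVolumes,
	}, nil
}
```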
It is recommended that CSI drivers allow for customization of volume limits. That way, cluster administrators can distribute the limits of the same storage backend (e.g. iSCSI) across different drivers according to their individual needs.
Defaulting max_volumes_per_node to 0 should maintain the current controller behaviour, and users could simply set a flag in cases where they need to change that value. It should really be down to cluster administrators to configure the maximum number of volumes allowed per CSI driver on a node, based on their workloads' needs and their hardware.
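As a sketch of what that flag could look like (the --max-volumes-per-node name is hypothetical, not an existing Trident option):

```go
// Hypothetical flag wiring for a node plugin binary: the limit defaults
// to 0 (preserving current behaviour, i.e. no limit is advertised), and
// cluster administrators opt in by setting the flag explicitly.
package main

import (
	"flag"
	"fmt"
)

func main() {
	maxVolumes := flag.Int64("max-volumes-per-node", 0,
		"maximum number of volumes this driver may attach to a node (0 = unlimited)")
	flag.Parse()

	// The parsed value would be handed to the node server and returned
	// verbatim in NodeGetInfoResponse.max_volumes_per_node.
	fmt.Printf("advertising max_volumes_per_node=%d\n", *maxVolumes)
}
```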
Additional context
This would be in line with the Kubernetes CSI docs: https://kubernetes-csi.github.io/docs/volume-limits.html and would follow best practices.
We've raised a similar issue in the past: https://github.com/NetApp/trident/issues/710