In WaitForFirstConsumer mode, LVM information should be able to participate in the filter and score phases of pod scheduling, so that the best node can be selected to start the pod.
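For context, delayed binding is configured on the StorageClass. A minimal sketch of an LVM-LocalPV StorageClass with WaitForFirstConsumer could look like the following; the volume group name `lvmvg` is a placeholder, not a value taken from this issue:

```yaml
# Sketch only: an LVM-LocalPV StorageClass with delayed binding, so volume
# placement is decided only when the first consuming pod is scheduled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"        # placeholder volume group name
volumeBindingMode: WaitForFirstConsumer
```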
@zwForrest LVM-LocalPV already has the k8s StorageCapacity feature, where the LVM information is shared with the k8s scheduler and it picks the node based on the score that the driver shares. See https://kubernetes-csi.github.io/docs/storage-capacity-tracking.html.
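As a rough illustration of how storage capacity tracking is generally wired up (the exact manifest shipped with LVM-LocalPV may differ), the CSI driver opts in via the storageCapacity field on its CSIDriver object, and the external-provisioner then publishes CSIStorageCapacity objects that the scheduler consults:

```yaml
# Sketch only: enabling storage capacity tracking for a CSI driver.
# With this set, the external-provisioner publishes CSIStorageCapacity
# objects per topology segment, and the kube-scheduler filters out nodes
# whose reported capacity cannot fit the pending volume.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: local.csi.openebs.io
spec:
  storageCapacity: true
```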
@pawanpraka1 Storage capacity tracking can only guarantee that the node has enough capacity, but the selected node may not be the most suitable one. This feature does not score nodes based on resource capacity. See https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking#drawbacks
@zwForrest How about additionally using allowedTopologies in the StorageClass to filter the suitable nodes first?
There are different scheduling criteria that are evaluated during volume placement. Refer to https://openebs.io/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration#lvm-supported-storageclass-parameters
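To illustrate the allowedTopologies suggestion above, here is a hedged sketch of a StorageClass that restricts provisioning to a subset of nodes; the node names and volume group are placeholders, not values from this issue:

```yaml
# Sketch: restrict LVM-LocalPV provisioning to specific nodes via
# allowedTopologies, so only those nodes are considered during the
# delayed-binding scheduling step.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv-restricted
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"            # placeholder volume group
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - node-1                   # placeholder node names
    - node-2
volumeBindingMode: WaitForFirstConsumer
```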
Describe the problem/challenge you have
With WaitForFirstConsumer, the pod is scheduled by the default scheduler, which selects the node, but LVM does not participate in that default scheduling. This means that the node LVM would select and the node the Kubernetes scheduler selects may not be the same.
Describe the solution you'd like
In WaitForFirstConsumer mode, LVM information should be able to participate in the filter and score phases of pod scheduling, so that the best node can be selected to start the pod.