hi @komer3 I added the CSI specification to the PR description :)
Basically, when your node has a limit of 8 and has 8 PVs mounted, it reports max_volumes_per_node
= 0, which means "no limit".
Here the CSI driver seems to be trying to replace the scheduler's decision :)
I'm also linking the GCP CSI code for comparison: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/a28f8d39b21ca8439d64445da33ed3d54a4dc67c/pkg/gce-pd-csi-driver/node.go#L664
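For illustration, here is a minimal sketch (not the Linode driver's actual code; `maxAttachments` and `countAttachedDevices` are made-up names) of the kind of NodeGetInfo logic being discussed. It shows how subtracting currently attached devices from the hard cap can bottom out at 0, which the CSI spec then treats as "no limit":

```go
// Hypothetical sketch only, not the Linode driver's actual code:
// maxAttachments and countAttachedDevices are assumptions for illustration.
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// Combined cap on local disks plus Block Storage volumes per instance.
const maxAttachments = 8

type NodeServer struct {
	csi.UnimplementedNodeServer
	nodeID string
	// countAttachedDevices reports how many devices (local disks such as
	// boot/swap, plus already-attached volumes) the instance currently has.
	countAttachedDevices func(ctx context.Context) (int, error)
}

func (ns *NodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	attached, err := ns.countAttachedDevices(ctx)
	if err != nil {
		return nil, err
	}
	// When the instance is already at the cap, this evaluates to 0,
	// which the CSI spec interprets as "no limit" -- the behaviour
	// discussed in this thread.
	remaining := maxAttachments - attached
	return &csi.NodeGetInfoResponse{
		NodeId:            ns.nodeID,
		MaxVolumesPerNode: int64(remaining),
	}, nil
}
```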
Is it possible that a Linode could have block storage attachments outside of those managed by the CSI driver? I don't think this is possible via CAPL-provisioned machines, for example.
Yes, I see in the tests that it expects 7 on empty nodes, because 1 PV is already mounted.
Ah I see. Thanks for providing the context!
We don't want to completely remove the listInstanceDisk() call. The reason is that 8GB nodes can only have 7 additional PVCs attached, because of the 1 boot disk that comes with them (listInstanceDisk will return more than 1 if some PVCs are already attached to that node). We want to report the exact number of allowed volume attachments at any given time. If we remove that logic, we would be returning the theoretical maximum volume attachment count, which would misrepresent how many volumes can actually be attached. I believe this would result in more scheduling errors.
For more context on why we have this logic, please check this issue: https://github.com/linode/linode-blockstorage-csi-driver/issues/182
And check the corresponding PR here: https://github.com/linode/linode-blockstorage-csi-driver/pull/184
Actually, let me correct myself here: the swap disk also counts towards the total allowed disk attachment limit.
From the Akamai tech docs (https://techdocs.akamai.com/cloud-computing/docs/block-storage#limits-and-considerations):
A combined total of 8 storage devices can be attached to a Compute Instance at the same time, including local disks and Block Storage volumes. For example, if your Compute Instance has two main disks, root and swap, you can attach no more than 6 additional volumes to this Compute Instance.
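To spell out that arithmetic, a tiny sketch (the names are illustrative, not from the driver) of the computation described in the docs:

```go
// Illustrative arithmetic from the doc excerpt above; names are made up.
package driver

// maxDevices is the combined cap on local disks plus Block Storage volumes.
const maxDevices = 8

// attachableVolumes returns how many more Block Storage volumes can be
// attached, given the number of local disks (root, swap, ...) and the
// volumes already attached.
func attachableVolumes(localDisks, attachedVolumes int) int {
	remaining := maxDevices - localDisks - attachedVolumes
	if remaining < 0 {
		return 0
	}
	return remaining
}

// Example from the docs: root + swap = 2 local disks, no volumes yet:
//   attachableVolumes(2, 0) == 6
```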
Anyway, I think the problem we need to solve here is that when there is no more room for disk attachments, the value gets set to 0, which may be incorrect.
Do we know an alternative value (other than 0) that indicates to the CO that we can no longer schedule workloads with PVCs on that particular node (no more room for attachments)? I haven't been able to find one.
@komer3 I changed it myself and now use a prefix to filter the PVs coming from our driver :)
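For reference, a rough sketch of what such prefix filtering could look like (the prefix constant and function name are assumptions, not the driver's actual identifiers):

```go
// Sketch of prefix-based filtering; the prefix value is an assumption,
// not necessarily what the Linode driver uses.
package driver

import "strings"

// driverVolumePrefix is a hypothetical marker for volumes created by
// this CSI driver.
const driverVolumePrefix = "pvc"

// filterDriverVolumes keeps only the attached volume labels that carry
// the driver's prefix, so only driver-managed volumes are considered
// when counting attachments.
func filterDriverVolumes(labels []string) []string {
	var managed []string
	for _, l := range labels {
		if strings.HasPrefix(l, driverVolumePrefix) {
			managed = append(managed, l)
		}
	}
	return managed
}
```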
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 74.79%. Comparing base (4ecf4c4) to head (65e681c).
When a node has the maximum number of PVCs, it reports 0, which means "no limit".
https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo