Open: samuelluohaoen1 opened this issue 1 year ago
No new CSIStorageCapacity object is created.
How do you check for this? With kubectl get csistoragecapacities or kubectl get --all-namespaces csistoragecapacities?
CSIStorageCapacity objects are namespaced, so the second command has to be used.
I tried to reproduce the issue with csi-driver-host-path v1.10.0, but there I get new CSIStorageCapacity objects after creating a storage class.
My commands:
/deploy/kubernetes-distributed/deploy.sh
kubectl delete storageclass.storage.k8s.io/csi-hostpath-slow
kubectl delete storageclass.storage.k8s.io/csi-hostpath-fast
kubectl get --all-namespaces csistoragecapacity
kubectl create -f deploy/kubernetes-distributed/hostpath/csi-hostpath-storageclass-fast.yaml
kubectl get --all-namespaces csistoragecapacity
csi-provisioner:v3.3.0
No new CSIStorageCapacity object is created.
How do you check for this? With kubectl get csistoragecapacities or kubectl get --all-namespaces csistoragecapacities? CSIStorageCapacity objects are namespaced, so the second command has to be used.
I tried to reproduce the issue with csi-driver-host-path v1.10.0, but there I get new CSIStorageCapacity objects after creating a storage class.
Yes it is indeed namespaced. My kubectl has the default namespace set to the namespace where the CSI plugins are deployed.
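(For reference, the default namespace can be pointed at the CSI deployment like this; the namespace name here is only a placeholder:
kubectl config set-context --current --namespace=csi-hostpath
)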
My commands:
/deploy/kubernetes-distributed/deploy.sh
kubectl delete storageclass.storage.k8s.io/csi-hostpath-slow
kubectl delete storageclass.storage.k8s.io/csi-hostpath-fast
kubectl get --all-namespaces csistoragecapacity
kubectl create -f deploy/kubernetes-distributed/hostpath/csi-hostpath-storageclass-fast.yaml
kubectl get --all-namespaces csistoragecapacity
From the sequence of your commands I do not see how the controller plugin is deployed before the node plugins. I think the order of deployment may be crucial to reproducing this issue. Could you make sure that step 2 happens before the node plugins are deployed? Thank you for your trouble.
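A rough sketch of the ordering being asked about (the first two manifest names are placeholders, not the actual files in the hostpath deployment):
# 1. controller plugin (external-provisioner with --enable-capacity) first
kubectl apply -f controller-plugin.yaml
# 2. only afterwards deploy/register the node plugins
kubectl apply -f node-plugin-daemonset.yaml
# 3. then create the storage class and check for capacity objects
kubectl create -f deploy/kubernetes-distributed/hostpath/csi-hostpath-storageclass-fast.yaml
kubectl get --all-namespaces csistoragecapacities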
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
/assign
@pohly: Reopened this issue.
@samuelluohaoen1: it looks like you are using a central controller for your CSI driver. Is that correct?
Can you perhaps share the external-provisioner log at level >= 5? There is code which should react to changes in the node and CSIDriver objects when the node plugin gets registered after the controller has started.
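(For reference, one way such a log could be captured; the pod, container, and namespace names below are only examples and must be adjusted to the actual deployment:
# bump the csi-provisioner sidecar verbosity in the controller pod spec, e.g.
#   args: ["--csi-address=/csi/csi.sock", "--enable-capacity", "-v=5"]
# then collect its log:
kubectl logs -n csi-hostpath csi-hostpath-controller-0 -c csi-provisioner > external-provisioner.log
)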
We don't have a CSI driver deployment readily available to test this scenario. I tried reproducing it through unit tests (see https://github.com/kubernetes-csi/external-provisioner/pull/942) but the code worked as expected.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@pohly We observed something similar, but the CSIStorageCapacity objects were created after about an hour.
I believe there is currently an issue where the capacity controller is tracking duplicated workqueue entries. See issue https://github.com/kubernetes-csi/external-provisioner/issues/1161
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
What happened: After new node plugins join the cluster and report new AccessibleTopologies.Segments, the current segment information is not getting updated. New CSIStorageCapacity objects are not being created.
What you expected to happen: New node plugins reporting new values for existing topology segments should, in a sense, "expand" the value sets of those segments, which in turn should result in CSIStorageCapacity objects being created for the newly accessible segments.
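(For context, the kind of object expected to appear looks roughly like this; the name, namespace, storage class, and capacity are invented for illustration:
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: csisc-example
  namespace: csi-hostpath
storageClassName: foo-bar-local
nodeTopology:
  matchLabels:
    kubernetes.io/hostname: some-node
capacity: 100Gi
Roughly one such object is published per storage class and topology segment once the node has registered.)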
How to reproduce it:
1. Deploy the controller plugin for a CSI driver named com.foo.bar. Check that STORAGECAPACITY is true for the CSIDriver object. At this point the CSINode objects still show DRIVERS: 0.
2. Deploy the node plugins and wait for the NodeGetInfo RPC to be called. The RPC should return the node's topology; the CSINode object then shows DRIVERS: 1 with a driver which is named com.foo.bar, with Node ID: some-node and Topology Keys: [kubernetes.io/hostname].
3. Create a storage class with volumeBindingMode: WaitForFirstConsumer and provisioner: com.foo.bar (a sketch of such a class is shown below).
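A minimal sketch of the StorageClass meant in step 3 (the object name is made up; the provisioner must match the driver name used above):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: foo-bar-local
provisioner: com.foo.bar
volumeBindingMode: WaitForFirstConsumer
With WaitForFirstConsumer the scheduler consults CSIStorageCapacity objects when placing pods, which is why the missing objects matter here.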
Anything else we need to know?: I am using the "kubernetes.io/hostname" label as the only key because we want topology to be constrained per node. Each PV is to be provisioned locally on some node. I also assumed that "kubernetes.io/hostname" is unique across the nodes and should by default exist on every node (I hope this is a reasonable assumption).
Environment:
- Kubernetes version (use kubectl version): 1.25+
- Kernel (e.g. uname -a): Linux 4.18.0
@pohly