One solution could be to add, for each PVC in the PVC configuration file (deployment/vp-cloud/templates/06-pvc.yaml), the name of the corresponding PV that it should claim. We can do this by adding the key volumeName, and we would obtain a file like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: videoserver-videofiles
spec:
  storageClassName: {{.Values.storageClassName}}
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: foo-pv4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: videoserver-logs
spec:
  storageClassName: {{.Values.storageClassName}}
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: foo-pv3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: videoserver-video
spec:
  storageClassName: {{.Values.storageClassName}}
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: foo-pv2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  storageClassName: {{.Values.storageClassName}}
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512M
  volumeName: foo-pv
Apparently foo-pv4 is bound to mongodb-data:
foo-pv4   5Gi   RWO,ROX,RWX   Retain   Bound   default/mongodb-data   default   6m11s
Could we create a new persistent volume, foo-pv5, and bind the videoserver-videofiles PVC to it? Finally, note that we would keep the requested capacity and the bound PV's capacity equal.
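If we go that route, here is a minimal sketch of what foo-pv5 could look like as a local PersistentVolume. This is the PV-side counterpart of volumeName: the claimRef field pre-binds the volume to the videoserver-videofiles claim so no other PVC can grab it. The host path and node name are assumptions and would have to match the actual cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv5
spec:
  capacity:
    storage: 5Gi                    # kept equal to the PVC request
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: {{.Values.storageClassName}}
  claimRef:                         # pre-bind this PV to the intended claim
    namespace: default
    name: videoserver-videofiles
  local:
    path: /mnt/disks/videofiles     # assumed host path
  nodeAffinity:                     # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1            # assumed node name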
Adding the key volumeName solved the issue for me, but I don't know whether it works with other types of PV provisioning (I am doing local provisioning).
Should I commit this change anyway?
I don't know whether adding a new PV will solve the issue at deployment time, since some PVs could still end up bound to the wrong PVCs.
I have been experiencing some issues with volumes that cause pending pods. Sometimes Persistent Volume Claims (PVCs) are not associated with their corresponding Persistent Volume (PV) and stay pending with the following warning (from kubectl describe pvc pvc_name):
Warning  ProvisioningFailed  1s (x18 over 4m2s)  persistentvolume-controller  storageclass.storage.k8s.io "default" not found
The fact is that the remaining persistent volumes don't fit the persistent volume claim requirements, as the output of kubectl get pvc shows. You can see that the last PVC, videoserver-videofiles (requesting at least 5Gi, as shown here), which should claim the PV foo-pv4 (as you can see here), is pending, while foo-pv4 is already bound to the PVC mongodb-data. It must be a matter of scheduling, because redeploying the cluster several times can solve the problem.
Note: here is the output of kubectl get pv:
Note 2: I am using local volumes.
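As an aside, the ProvisioningFailed warning above means that no StorageClass object with the name referenced by .Values.storageClassName (apparently "default") exists in the cluster, so the controller cannot fall back to dynamic provisioning. For statically provisioned local volumes, the usual pattern, and this is a suggestion rather than what the chart currently ships, is a no-provisioner StorageClass with delayed binding, so the class actually exists and PV/PVC matching only happens once a pod is scheduled:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default                            # assumed to match .Values.storageClassName
provisioner: kubernetes.io/no-provisioner  # static provisioning only
volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod needs the claim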