Thank you for creating the issue! One of our team members will get back to you shortly with additional information.
Thanks for opening this issue. Closing as v1.1.1 has shipped with XFS support.
You can use XFS support by creating a storage class that looks something like:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage-xfs
provisioner: dobs.csi.digitalocean.com
parameters:
  fsType: xfs
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
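For reference, a PersistentVolumeClaim that requests an XFS volume through that class could look something like this (a minimal sketch; the claim name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xfs-example-pvc        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi             # placeholder size
  storageClassName: do-block-storage-xfs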
How do you expect this to be used? I get the following error: MountVolume.MountDevice failed for volume "pvc-19abb273-b773-11e9-844d-7ed2c50452c7" : rpc error: code = Internal desc = exec: "mkfs.xfs": executable file not found in $PATH. I even tried to delete the cluster and create it again (I guessed you had changed something in the node initialization), but it gives the same error. Please respond as quickly as possible; it is very important for us to be able to use this feature soon.
Hi @bjg2 — which version of the CSI implementation are you using?
I have an example using XFS volumes in https://github.com/snormore/doks-examples/tree/master/xfs, but it requires v1.1.1 of this CSI implementation.
Looking further, I noticed that the csi-do-node pods use csi-node-driver-registrar:v1.0.1. I tried to update it via kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v1.1.1.yaml, but I get:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io unchanged
volumesnapshotclass.snapshot.storage.k8s.io/do-block-storage unchanged
storageclass.storage.k8s.io/do-block-storage unchanged
statefulset.apps/csi-do-controller created
serviceaccount/csi-do-controller-sa unchanged
clusterrole.rbac.authorization.k8s.io/csi-do-provisioner-role configured
clusterrolebinding.rbac.authorization.k8s.io/csi-do-provisioner-binding unchanged
clusterrole.rbac.authorization.k8s.io/csi-do-attacher-role configured
clusterrolebinding.rbac.authorization.k8s.io/csi-do-attacher-binding unchanged
clusterrole.rbac.authorization.k8s.io/csi-do-snapshotter-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/csi-do-snapshotter-binding unchanged
daemonset.apps/csi-do-node configured
serviceaccount/csi-do-node-sa unchanged
clusterrole.rbac.authorization.k8s.io/csi-do-node-driver-registrar-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/csi-do-node-driver-registrar-binding unchanged
error: unable to recognize "https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v1.1.1.yaml": no matches for kind "CSIDriver" in version "storage.k8s.io/v1beta1"
Can you please explain how this feature is expected to be used with DO Kubernetes?
The csi-do-plugin image would have to use v1.1.1 of this CSI implementation. csi-node-driver-registrar:v1.0.1 is part of the upstream CSI interface, so that should remain as-is. The example I have in https://github.com/snormore/doks-examples/tree/master/xfs is a good representation of how to use XFS volumes on DOKS.
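If it helps, one way to check which plugin image a cluster is actually running is something like the following (a sketch; it assumes the default DOKS deployment with the csi-do-node DaemonSet in kube-system):

kubectl -n kube-system get daemonset csi-do-node \
  -o jsonpath='{.spec.template.spec.containers[*].image}'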
@bjg2 Can you post your manifests? What version of Kubernetes are you running? Are you running a managed DOKS cluster or a self-managed Kubernetes cluster using the CSI?
I'm using managed DOKS v1.13.8-do.1 (the newest version for Kubernetes v1.13). I just deleted the cluster and created a new one, and I can do so again if needed. All the settings are default, except for my StorageClass, which follows your instructions:
{
  "kind": "StorageClass",
  "apiVersion": "storage.k8s.io/v1",
  "provisioner": "dobs.csi.digitalocean.com",
  "metadata": {
    "name": "do-block-storage-xfs"
  },
  "parameters": {
    "fsType": "xfs"
  },
  "reclaimPolicy": "Retain",
  "volumeBindingMode": "WaitForFirstConsumer"
}
All I did was create pods that used the new StorageClass (via the mongodb-replicaset chart - https://github.com/helm/charts/tree/master/stable/mongodb-replicaset), roughly as sketched below.
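Something along these lines (a sketch with Helm 2 syntax; the release name is a placeholder and the exact value path for the storage class depends on the chart version, so check its values.yaml):

helm install --name mongo stable/mongodb-replicaset \
  --set persistentVolume.storageClass=do-block-storage-xfs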
Describing the csi-do-node pods, I noticed they are using csi-node-driver-registrar:v1.0.1. Applying https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v1.1.1.yaml didn't work, as explained above.
So what exact steps should I take after creating a vanilla DOKS cluster?
My apologies for the confusion @bjg2. I see now why you're unable to get this working on your cluster. Due to Kubernetes compatibility constraints in our CSI, we have only updated DOKS versions 1.14+ to use the new v1.1.1 of this CSI implementation. Is it possible for you to create a 1.14 cluster instead of a 1.13 one?
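If it helps, the available DOKS versions can be listed with doctl and a 1.14 cluster created from there (a sketch; the cluster name, region, and node count/size are placeholders, and the exact version slug should be taken from the output of the first command):

doctl kubernetes options versions
doctl kubernetes cluster create xfs-test-cluster \
  --version 1.14.4-do.0 \
  --region fra1 \
  --count 3 \
  --size s-2vcpu-4gb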
Oh, I didn't catch that. Not sure if there was a reason for using a v1.13 cluster or whether it was just in our scripts for stability. I will test it out and let you know the outcome, thanks!
Moved to v1.14. Had problems with the load balancer forcing the http protocol, but managed to work around it manually for now. The volume is XFS-formatted and everything seems to work, thanks!
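(Not necessarily the workaround used here, but for anyone hitting the same load-balancer issue: on DigitalOcean the Service protocol can typically be steered with the do-loadbalancer-protocol annotation from the DigitalOcean cloud controller manager. A sketch with placeholder names and ports:)

apiVersion: v1
kind: Service
metadata:
  name: my-service               # placeholder name
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "tcp"
spec:
  type: LoadBalancer
  selector:
    app: my-app                  # placeholder selector
  ports:
    - port: 443
      targetPort: 443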
How can I request an XFS persistent volume on DigitalOcean Kubernetes? I need it because XFS is the recommended and faster filesystem for MongoDB, which I host on my cluster.