Open chenmiao1991 opened 6 months ago
Do `juicefs-app1` and `juicefs-app2` run on the same node or on different nodes? If they run on different nodes, it is very likely that the issue is caused by the `accessModes` being RWO.
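For context, an RWO (ReadWriteOnce) volume can only be attached to a single node at a time, so a second mount pod scheduled on another node waits forever for the attach. A minimal sketch of the kind of RBD-backed cache PVC involved (the PVC name and StorageClass are placeholders, not from this thread):

```yaml
# Hypothetical RBD-backed PVC used as the JuiceFS cache path.
# accessModes: ReadWriteOnce means the volume can be attached to
# ONE node at a time; a mount pod on a second node stays stuck in
# Init because the volume is already attached elsewhere.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: juicefs-cache-pvc        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce              # RWO: single-node attachment only
  storageClassName: rbd-sc       # placeholder Ceph RBD StorageClass
  resources:
    requests:
      storage: 50Gi
```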
@showjason They run on different nodes. How should I use a block-device PVC as the cache? The examples I have seen all use cloud-vendor block devices.
@chenmiao1991 As far as I know, Ceph RBD doesn't support RWX, but it does support ROX. A dedicated cache cluster might be one way to address your issue, or you could use NFS instead of block storage. @zxh326 sorry, do you have any better ideas?
@showjason We adopted the JuiceFS distributed file system precisely to replace file systems like NFS, so falling back to NFS would bring us right back to where we started.
@showjason @zxh326 Maybe the juicefs-csi-node DaemonSet could automatically mount its own RBD as cache, so that other applications can share that RBD cache. We'd rather not use hostPath mode, as it makes batch creation and deletion of RBD images inconvenient.
What happened:
When I try use-pvc-as-cache-path, the second mount pod never reaches Running.
What you expected to happen:
Each JuiceFS app's mount pod uses its own RBD cache block and runs normally.
How to reproduce it (as minimally and precisely as possible):
The second mount pod is stuck in the Init:0/1 state with a Warning.
Anything else we need to know?
How to implement each JuiceFS app mount pod with its own RBD cache block?
Any suggestions ?
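One possible direction, sketched below under assumptions not confirmed in this thread: instead of attaching a shared RWO PVC, keep the cache node-local. Each node mounts its own RBD image at a fixed directory outside Kubernetes (e.g. via a small systemd unit), and the StorageClass points the JuiceFS cache there using the real JuiceFS mount options `cache-dir` and `cache-size`. All names below are placeholders; check the CSI driver docs for your version before relying on this.

```yaml
# Sketch: node-local cache directory instead of a shared RWO PVC.
# Assumes every node has pre-mounted its own RBD image at
# /mnt/jfs-cache; the mount pods on each node then cache there,
# and no volume is ever shared across nodes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc               # placeholder name
provisioner: csi.juicefs.com
parameters:
  csi.storage.k8s.io/provisioner-secret-name: juicefs-secret      # placeholder
  csi.storage.k8s.io/provisioner-secret-namespace: default        # placeholder
mountOptions:
  - cache-dir=/mnt/jfs-cache     # node-local directory backed by RBD
  - cache-size=51200             # JuiceFS cache size limit, in MiB
```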
Environment:
JuiceFS CSI Driver version (which image tag did your CSI Driver use): v0.18.1
Kubernetes version (e.g. kubectl version): v1.23.13
Object storage (cloud provider and region): Ceph 14
Metadata engine info (version, cloud provider managed or self maintained): self maintained.
Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage):
Others: