democratic-csi / democratic-csi

csi storage for container orchestration systems

zfs-local (non-ephemeral) as a possibility (possibly a feature request) #148

Open · DanieleVistalli opened 2 years ago

DanieleVistalli commented 2 years ago

Hi, I would like to investigate whether democratic-csi could support ZFS on Linux (ZoL) datasets exposed to pods in filesystem mode.

My use case is that I want to provision node-bound PVs (my application supports application-based replication) and have the actual ZFS dataset mounted into the pod (my need is for RWO volumes, but being node-local, RWX should come essentially for free).

Since these would be plain ZFS datasets, I would like to be able to use dataset snapshots directly. I don't need (and don't want) to put another filesystem on top of PVs provisioned this way.

Is this something that can be considered?

travisghansen commented 2 years ago

It’s been a while since I looked into it, but last I recall it was not possible with pure/standard CSI/k8s semantics and required something like CRDs to fill in the gaps.

There is a project that does exactly that however: https://github.com/openebs/zfs-localpv

Is there something with that project which doesn’t meet your needs?

DanieleVistalli commented 2 years ago

Thank you for the prompt response. I know the OpenEBS solution, but my intention is to pick a single product to cover it all. That's just it. Btw, I will perform a side-by-side test of democratic-csi and zfs-localpv to see if I can get what I need by mixing the two.

Thanks for the great work. I'm going to use democratic-csi for its current use cases anyway.

I will try to learn more about CSI semantics to figure out if there's some new way to achieve what I asked for (I'm not good enough at development, but if we find a way we might be able to hire somebody to implement it and submit a PR to add it).

travisghansen commented 2 years ago

It has been discussed with the k8s team (and there may even be some PRs floating around). The predominant issue is enforcing that k8s invokes ‘controller’ operations on the node the pod is scheduled to. In the last year or so some additional features have been added which make it closer to feasible: https://github.com/kubernetes-csi/external-provisioner#distributed-provisioning

I’ll need to revisit but if the k8s pieces have all come together then the answer to the original request is a resounding yes (and it would be quite easy to implement).
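
For reference, the distributed-provisioning mode in that external-provisioner link boils down to running the provisioner sidecar alongside each node plugin with flags along these lines (a sketch; the same flags appear in the full helm values sample later in this thread):

- --leader-election=false
- --node-deployment=true
- --node-deployment-immediate-binding=false
- --feature-gates=Topology=true
- --strict-topology=true

Combined with a StorageClass using volumeBindingMode: WaitForFirstConsumer, this keeps provisioning on the node where the pod is actually scheduled.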

DanieleVistalli commented 2 years ago

Thanks for the additional feedback. I now understand the challenge you described above (locality of execution). While everybody else wants RWX storage available everywhere, I'm asking for something strictly local.

Should I keep this open or just close it?

travisghansen commented 2 years ago

Understood entirely. The bits may all be there at this point; I'm not entirely sure. It's honestly been a while since I visited the matter.

Let’s leave it open and see what the current state is. I’ll ask around on csi slack and see what the current state of the ecosystem is.

Unrelated but what storage system do you use for ‘remote’ volume use-cases?

DanieleVistalli commented 2 years ago

For the "remote" use case we use FreeNAS / TrueNAS and are pretty happy with it; this is the reason I came looking into democratic-csi, and we will definitely move to it.

For one of our application stacks NFS is not good enough, and iSCSI proved to not be good enough either.

This stack is actually a database and can perform application-level replication (so we are happy with RWO and let the app stack sync among its nodes and underlying PVs).

In this case having local (direct-attached) ZFS datasets is perfect, as we get normal Unix filesystem calls (no NFS locking issues). Also, the dataset that each pod addresses is made of millions of files (our largest environment so far addresses 15 TB of data from a single pod). This same workload proved to fail or perform poorly over NFS to a remote FreeNAS.

We currently run this on local ZoL datasets: we provision them in a semi-automated way (but not k8s-integrated) and use local-path PVs pointing straight at the ZoL dataset mount point. From a performance and usability point of view this is perfect for what we do; from an automation (k8s) point of view it is a #fail.
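
For context, a minimal sketch of the kind of hand-managed local-path PV described above (the dataset mount point, node name, and storage class name are made up for illustration):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-data-node1
spec:
  capacity:
    storage: 15Ti
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual-zfs-local
  local:
    # mount point of the manually created ZoL dataset
    path: /tank/db/node1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1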

Being able to provision local datasets in a k8s driven way would allow us to move from manual storage provisioning to a storage-class driven design.

travisghansen commented 2 years ago

Yeah! I love it! Understood on all the above.

I’ll poke around at the current state of things and see where it’s at. It may have moved further than I’m aware of.

travisghansen commented 2 years ago

For reference this is the blog post about the work that has been done: https://kubernetes.io/blog/2021/04/14/local-storage-features-go-beta/

travisghansen commented 2 years ago

OK, I'm still not entirely sure about the k8s tooling, but the driver itself has been reworked/updated to support local volumes. I'll get everything committed in the next day or 2 and then start testing actual deployment in k8s.

The great thing is it heavily leverages all the work already done for the other zfs drivers, so the code paths are well-tested and unified across the board. Some minor changes were made to the node/mounting/etc side of the equation to support local zfs volumes, but nothing major.

Of note (this can be changed): in order to make it work seamlessly with the rest of the node tooling, I've decided to force zfs set mountpoint=legacy <dataset> on the datasets. This means a couple of things:

- the datasets are not auto-mounted by zfs at their normal hierarchy path; they are mounted with the standard node mount tooling (under /var/lib/kubelet) only while a pod is actually using them
- when no pod is using a dataset, it is simply not mounted anywhere

If you have strong feelings about the above please shout out :)
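
For reference, a minimal sketch of what the legacy mountpoint behaviour looks like from the node's perspective (the pool/dataset and PV names are hypothetical; the driver runs the equivalent commands itself):

# the driver sets the property on each provisioned dataset
zfs set mountpoint=legacy tank/k8s/local/dataset/v/pvc-example

# with mountpoint=legacy, zfs no longer auto-mounts the dataset anywhere...
zfs get mountpoint tank/k8s/local/dataset/v/pvc-example

# ...it is only mounted with the standard node tooling, under /var/lib/kubelet,
# while a pod is actually using the volume
mount -t zfs tank/k8s/local/dataset/v/pvc-example \
  /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-example/globalmount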

DanieleVistalli commented 2 years ago

First of all, thank you for your quick and detailed work.

With regard to mounting as "legacy" I have no strong feelings right now. Ideally NOBODY should mess around with managed datasets, so I cannot say I don't like it (and I'm not knowledgeable enough about ZFS to figure out other issues).

I also see this should not prevent any work in the future to support ZFS snapshots (and that is what I really care about).

Having the volume not mounted is even better.

Going further down the security path, we could imagine using zfs allow/unallow to only let the driver user operate on the dataset used as the parent/root for provisioning (maybe I'm getting carried away).
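
As a sketch of that idea (the user name and parent dataset are hypothetical; the exact permission set would need tuning):

# delegate only the permissions the driver needs, scoped to the parent dataset
zfs allow -u csi-driver create,destroy,mount,snapshot,clone,promote tank/k8s/local

# review what has been delegated
zfs allow tank/k8s/local

# revoke it again if needed
zfs unallow -u csi-driver tank/k8s/local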

I love that when no pod is using the dataset it is just not mounted.

So my feedback is: go for it. What you described above makes complete sense to me.

travisghansen commented 2 years ago

Sounds good. csi conformance test suite is passing for both zvol- and dataset-based drivers locally, so I suspect a commit soon enough.

DanieleVistalli commented 2 years ago

Oh, just an extra question.

Do you see/expect issues using democratic-csi on an openSUSE (Leap, kernel 5.3) + ZFS 2.1 host?

travisghansen commented 2 years ago

No, I wouldn't expect issues there. You definitely will need to use the host utils (the zfs binary etc. will not be installed in the container, so when invoked it will chroot to the host). However, you're welcome to spin up a test machine and run the csi test suite to ensure everything looks good.
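
A quick host-side sanity check before running the suite might look like this (a sketch, assuming OpenZFS 2.1 is installed from the host packages; the pool name is hypothetical):

# confirm the kernel module and userland tools are present on the host
modprobe zfs
zfs version

# confirm the pool that will hold datasetParentName is healthy
zpool status tank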

DanieleVistalli commented 2 years ago

It's on our list. Will test on SUSE this week.

travisghansen commented 2 years ago

OK, hopefully I have code and CI etc all in place shortly.

travisghansen commented 2 years ago

https://github.com/democratic-csi/democratic-csi/actions/runs/1780197433 first try!

travisghansen commented 2 years ago

OK, now tested end-to-end in a shiny new cluster where the nodes support zfs. I'll commit a new chart version to support the changes along with sample values.yaml files and then you should be good to do some testing in a cluster.

The k8s bits still don't support resizing (not sure if/when this will land) and snapshotting (snapshotting appears to be done, but not in a release yet).

DanieleVistalli commented 2 years ago

Great, we will test ASAP with the new chart on our test cluster and then promote it to the others!!! Thanks a lot.

travisghansen commented 2 years ago

OK, chart details here: https://github.com/democratic-csi/charts/commit/265f46d3546db67bd2ba06381dd9c7f8a9e41b55

The example values.yaml will not work 100% right now because I haven't merged to master, so you would have to override the images for the csi driver to use the next tag. Something like this added to the sample values.yaml files:

node:
  driver:
    image: democraticcsi/democratic-csi:next
    imagePullPolicy: Always
    logLevel: debug

Also for the sample config files you need to use the next branch here as well: https://github.com/democratic-csi/democratic-csi/tree/next/examples

Here's a full sample (with noise) that I used for testing locally:

# driver only works with 1.16+
csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.zfs-local-dataset"
  storageCapacity: true

storageClasses:
- name: zfs-local-dataset
  defaultClass: false
  reclaimPolicy: Delete
  volumeBindingMode: WaitForFirstConsumer
  # distributed support is not yet ready
  allowVolumeExpansion: false
  parameters:
    fsType: zfs

  mountOptions: []
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:

# if your cluster supports snapshots you may enable below
volumeSnapshotClasses: []
#- name: nfs-client
#  secrets:
#    snapshotter-secret:

controller:
  enabled: true
  strategy: node

  externalProvisioner:
    extraArgs:
    - --leader-election=false
    - --node-deployment=true
    - --node-deployment-immediate-binding=false
    - --feature-gates=Topology=true
    - --strict-topology=true
    - --enable-capacity=true
    - --capacity-ownerref-level=1

  # distributed support is not yet ready
  externalResizer:
    enabled: false
  # distributed support is not yet ready
  externalSnapshotter:
    enabled: false
    extraArgs:
    - --node-deployment
    # snapshot controller option
    #- --enable-distributed-snapshotting=true

  driver:
    #image: democraticcsi/democratic-csi:latest
    image: democraticcsi/democratic-csi:next
    #image: democraticcsi/democratic-csi:v1.4.4
    imagePullPolicy: Always
    logLevel: debug

node:
  driver:
    #image: democraticcsi/democratic-csi:latest
    image: democraticcsi/democratic-csi:next
    #image: democraticcsi/democratic-csi:v1.4.4
    imagePullPolicy: Always
    logLevel: debug

driver:
  config:
    #driver: 
    # rest of per-driver config data/syntax
    driver: zfs-local-dataset

    zfs:
      datasetParentName: tank/k8s/local/dataset/v
      detachedSnapshotsDatasetParentName: tank/k8s/local/dataset/s

      datasetEnableQuotas: true
      datasetEnableReservation: false

Venthe commented 2 years ago

That's actually an amazing job, and it's precisely what I need, when I need it; so thank you!

My only question is: is it possible to add ReadWriteMany capability? As of now, my PVCs with this requirement are not being provisioned. Unfortunately, the gerrit helm chart is really adamant about having them RWX... :)

travisghansen commented 2 years ago

Thanks! That’s counter-intuitive for sure but I don’t think it would be difficult to allow managing that via the driver config…it probably doesn’t make much sense as a default however.

DanieleVistalli commented 2 years ago

I would have asked for the same (RWX on local volumes); I have the exact same need. I am having my test cluster updated to 1.21, so this week I will finally test your work and report back. Thank you!!!

travisghansen commented 2 years ago

https://github.com/democratic-csi/democratic-csi/commit/4dd57c13bd72a593e3d02afd11b3848c28ee61f1

Wait for the build to finish here: https://github.com/democratic-csi/democratic-csi/actions/runs/1816103561 then update your containers to the current next image

In the driver config block simply set the csi block as below (it will work with zvol or dataset driver):

driver: zfs-local-dataset

csi:
  access_modes:
  - UNKNOWN
  - SINGLE_NODE_WRITER
  - SINGLE_NODE_SINGLE_WRITER
  - SINGLE_NODE_MULTI_WRITER
  - SINGLE_NODE_READER_ONLY
  - MULTI_NODE_READER_ONLY
  - MULTI_NODE_SINGLE_WRITER
  - MULTI_NODE_MULTI_WRITER

zfs:
  datasetParentName: ...
  ...

EDIT: I may make that the default...it seems strange but shouldn't be harmful and will likely result in the most seamless user experience. I'll think on it.
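
For reference, a claim that exercises the new access modes against the storage class from the earlier sample values would look roughly like this (the claim name, namespace, and size are taken from the PVC listing further down the thread):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gerrit-git-repositories-pvc
  namespace: gerrit
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: zfs-local-dataset
  resources:
    requests:
      storage: 5Gi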

Venthe commented 2 years ago

@travisghansen well, I have now been spoiled. You just can't imagine how much fighting I had with NFS, resigning myself to testing iSCSI, then noticing this thread providing an answer that was literally made A DAY after I originally had my problems... and my request being fulfilled in hours. Thank you VERY, VERY much :)

PVCs are being provisioned. Initial impressions are, however, negative: after I upgraded (not a fresh install) the CSI, I lost the mounts on the ZFS side, with the data nowhere to be found. The NAS is reporting their existence, but they are gone on the Linux side.

Logs from kubectl get pods -A | grep zfs | awk '{print $2}' | xargs kubectl logs --namespace=democratic-csi are attached; in the meantime I'm nuking the CSI - my data is not yet important.

csi-driver.log driver-registrar.log external-provisioner.log

Venthe commented 2 years ago

  1. Nuking the CSI did nothing
  2. Rebooting the server did nothing
  3. Unmounting the PVs & nuking the CSI pods did nothing

After recreating the CSI pods no mounts are actually created, so I'd even suggest this is a regression: datasets are created, but not populated.

travisghansen commented 2 years ago

Are you suggesting volumes quit working for the iscsi/nfs drivers? Or it’s a regression of just the zfs-local driver?

I would expect the datasets to not appear mounted (see the discussion above about the legacy mountpoint property). The data will only be mounted if a pod (or pods) is actively using it, and even then it won't be mounted in the 'normal' path but rather somewhere under /var/lib/kubelet. You will have to run the mount command to see whether it is currently mounted.

Venthe commented 2 years ago

Are you suggesting volumes quit working for the iscsi/nfs drivers? Or it’s a regression of just the zfs-local driver?

ZFS local

I would expect the datasets to not appear mounted (see the discussion above about the legacy mountpoint property). The data will only be mounted if a pod (or pods) is actively using it, and even then it won't be mounted in the 'normal' path but rather somewhere under /var/lib/kubelet. You will have to run the mount command to see whether it is currently mounted.

Actually, the pods have no access to the data; that's why I checked the dataset. So tell me if I understand correctly - with this legacy mounting, the data will not reside in any of the datasets? Frankly, it's a bit confusing to me.

Anyway, the directory is not mounted into the pod even though the PVs are bound.

➜  ~ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                             STORAGECLASS        REASON   AGE
pvc-2e62d68e-707a-4016-a6ea-bab0500c9428   5Gi        RWX            Delete           Bound    gerrit/gerrit-git-repositories-pvc                zfs-local-dataset            6h40m
pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1   8Gi        RWO            Delete           Bound    external-dns/data-etcd-0                          zfs-local-dataset            6h31m
pvc-cafc630d-9c27-4d18-bdb1-b280d2f69699   10Gi       RWO            Delete           Bound    gerrit/gerrit-site-gerrit-gerrit-stateful-set-0   zfs-local-dataset            6h40m
➜  ~ mount | grep pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1
➜  ~ find / 2>/dev/null | grep pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1
/var/db/system/rrd-6dbe6d2efc1443dd8e16d94d55b0075f/localhost/df-var-lib-kubelet-pods-92df8aac-0433-4e1d-a11f-42a70bf8c2c6-volumes-kubernetes.io~csi-pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1-mo
/var/db/system/rrd-6dbe6d2efc1443dd8e16d94d55b0075f/localhost/df-var-lib-kubelet-pods-92df8aac-0433-4e1d-a11f-42a70bf8c2c6-volumes-kubernetes.io~csi-pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1-mo/df_complex-reserved.rrd
/var/db/system/rrd-6dbe6d2efc1443dd8e16d94d55b0075f/localhost/df-var-lib-kubelet-pods-92df8aac-0433-4e1d-a11f-42a70bf8c2c6-volumes-kubernetes.io~csi-pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1-mo/df_complex-free.rrd
/var/db/system/rrd-6dbe6d2efc1443dd8e16d94d55b0075f/localhost/df-var-lib-kubelet-pods-92df8aac-0433-4e1d-a11f-42a70bf8c2c6-volumes-kubernetes.io~csi-pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1-mo/df_complex-used.rrd
/var/db/system/rrd-6dbe6d2efc1443dd8e16d94d55b0075f/localhost/df-var-lib-kubelet-plugins-kubernetes.io-csi-pv-pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1-globalmount
/var/db/system/rrd-6dbe6d2efc1443dd8e16d94d55b0075f/localhost/df-var-lib-kubelet-plugins-kubernetes.io-csi-pv-pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1-globalmount/df_complex-used.rrd
/var/db/system/rrd-6dbe6d2efc1443dd8e16d94d55b0075f/localhost/df-var-lib-kubelet-plugins-kubernetes.io-csi-pv-pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1-globalmount/df_complex-free.rrd
/var/db/system/rrd-6dbe6d2efc1443dd8e16d94d55b0075f/localhost/df-var-lib-kubelet-plugins-kubernetes.io-csi-pv-pvc-8adee919-879e-489b-94f0-fd2ac05ac0a1-globalmount/df_complex-reserved.rrd
➜  ~

Edit: Just to verify:

If you are saying that I shouldn't see any changes to the dataset itself, then maybe the problem lies in https://github.com/Venthe/Personal-Development-Pipeline/blob/861d77aa85d7c359b3989a01f94db66ff27b2ca4/provisioning/cluster_vagrant/core/1.7_csi-democratic-csi-zfs-local/zfs-local-dataset.values.yaml#L82 ?

travisghansen commented 2 years ago

I know it seems confusing, but it's really not. By default zfs will mount datasets in a directory hierarchy that matches the zfs hierarchy, but this can be disabled (mountpoint=legacy). When mountpoint=legacy is in effect, the volume not only isn't mounted in the 'normal' location; it also becomes possible to mount the dataset using the standard mount tools on the node:

# from your logs
executing mount command: mount -t zfs -o defaults main/k8s/zfs-local/volumes/pvc-9debad5d-f4d3-4100-b0cb-ff146f64bec9 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9debad5d-f4d3-4100-b0cb-ff146f64bec9/globalmount

So the data is indeed in your dataset, it's just not mounted/accessible where you normally would access it.

Is this running on the k3s cluster on SCALE? I'm wondering if SCALE has a non-standard kubelet path and that's why it's not actually accessible to the pod? Your mount commands should certainly show results with type zfs that match the pvc/pv name as well.

travisghansen commented 2 years ago

kubelet path seems to be the issue..

https://github.com/k3s-io/k3s/issues/840 https://github.com/democratic-csi/charts/blob/master/stable/democratic-csi/values.yaml#L208

EDIT: scratch that, it appears they changed the default...maybe it's different in SCALE though?

Venthe commented 2 years ago

@travisghansen It's a full bare-metal Kubernetes cluster https://github.com/Venthe/Personal-Development-Pipeline/tree/develop/provisioning/cluster_vagrant - my toy cluster where every part is lovingly joined and often broken :) So you can actually track 95% of the configuration through my ansible & bash scripts. The only major change is related to the NodeSwap feature gate being turned on, as TrueNAS uses swap.

But yes, I'm talking about TrueNAS SCALE (TrueNAS-22.02-RC.1-2, TrueNAS-SCALE-Angelfish-RC). Just to run a test:

# helm uninstall CSI, ETCD, GERRIT, deleted NS. Removed every single PV from the system where I found it.
root@truenas:/mnt/main/Backup/Repositories/_github/Personal-Development-Pipeline/provisioning/cluster_vagrant# find / 2>/dev/null | grep -E '/var/lib/kubelet.*pvc.*'
root@truenas:/mnt/main/Backup/Repositories/_github/Personal-Development-Pipeline/provisioning/cluster_vagrant# mount | grep pvc
➜  cluster_vagrant git:(develop) ✗ (cd core/1.7_csi-democratic-csi-zfs-local && ./bootstrap.sh ) && (cd provisioning/1.9b_externaldns-coredns/manual/ && ./external-dns.sh)
# NO (relevant) ERRORS, skipped for brevity
# Error was on my part, no docker meant no certs... :) so I've redeployed certs manually
➜  cluster_vagrant git:(develop) mount | grep pvc
main/k8s/zfs-local/volumes/pvc-f286f25b-8e97-4375-86e8-b912c36eba09 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f286f25b-8e97-4375-86e8-b912c36eba09/globalmount type zfs (rw,relatime,xattr,posixacl)
main/k8s/zfs-local/volumes/pvc-f286f25b-8e97-4375-86e8-b912c36eba09 on /var/lib/kubelet/pods/4336fb5b-0090-46c5-9b48-1d76cadece15/volumes/kubernetes.io~csi/pvc-f286f25b-8e97-4375-86e8-b912c36eba09/mount type zfs (rw,relatime,xattr,posixacl)
➜  cluster_vagrant git:(develop) sudo find / 2>/dev/null | grep -E '/var/lib/kubelet.*pvc.*'
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f286f25b-8e97-4375-86e8-b912c36eba09
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f286f25b-8e97-4375-86e8-b912c36eba09/vol_data.json
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f286f25b-8e97-4375-86e8-b912c36eba09/globalmount
/var/lib/kubelet/pods/4336fb5b-0090-46c5-9b48-1d76cadece15/volumes/kubernetes.io~csi/pvc-f286f25b-8e97-4375-86e8-b912c36eba09
/var/lib/kubelet/pods/4336fb5b-0090-46c5-9b48-1d76cadece15/volumes/kubernetes.io~csi/pvc-f286f25b-8e97-4375-86e8-b912c36eba09/mount
/var/lib/kubelet/pods/4336fb5b-0090-46c5-9b48-1d76cadece15/volumes/kubernetes.io~csi/pvc-f286f25b-8e97-4375-86e8-b912c36eba09/vol_data.json
➜  ~ ls /mnt/main/k8s/zfs-local/volumes
➜  cluster_vagrant kubectl exec -nexternal-dns etcd-0 -- ls /bitnami/etcd
data

And it's now working... for some reason? I can't rule out a user error on my part, but I am pretty sure I did everything right and in the same way as now.

Edit: I can access RWO, but RWX still seems broken:

➜  cluster_vagrant ./provision_truenas.sh ./provisioning/2.1_ldap/ansible.yml
# Omitted
➜  cluster_vagrant ./provision_truenas.sh ./provisioning/2.2_gerrit/ansible.yml
# Omitted
➜  .kube kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                             STORAGECLASS        REASON   AGE
pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1   5Gi        RWX            Delete           Bound    gerrit/gerrit-git-repositories-pvc                zfs-local-dataset            3m42s
pvc-d37fcca0-ccb2-4e4a-9ddb-12d52ab39ca8   10Gi       RWO            Delete           Bound    gerrit/gerrit-site-gerrit-gerrit-stateful-set-0   zfs-local-dataset            3m42s
pvc-f286f25b-8e97-4375-86e8-b912c36eba09   8Gi        RWO            Delete           Bound    external-dns/data-etcd-0                          zfs-local-dataset            32m
➜  .kube kubectl get pods -ngerrit -owide
NAME                           READY   STATUS       RESTARTS       AGE    IP           NODE      NOMINATED NODE   READINESS GATES
gerrit-gerrit-stateful-set-0   0/1     Init:Error   5 (103s ago)   4m8s   10.0.85.32   truenas   <none>           <none>
➜  .kube kubectl logs -ngerrit pod/gerrit-gerrit-stateful-set-0 --follow -cgerrit-init
#...
[2022-02-09 16:31:37,830] [main] ERROR com.google.gerrit.server.index.account.AllAccountsIndexer : Error collecting accounts
org.eclipse.jgit.errors.RepositoryNotFoundException: repository not found: Cannot open repository All-Users
#...
➜  ~ sudo find / 2>/dev/null | grep -E '/var/lib/kubelet.*pvc.*' | grep pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1/globalmount
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1/vol_data.json
/var/lib/kubelet/pods/7bc8db86-7b2a-499a-adcc-bae0aaed4a42/volumes/kubernetes.io~csi/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1
/var/lib/kubelet/pods/7bc8db86-7b2a-499a-adcc-bae0aaed4a42/volumes/kubernetes.io~csi/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1/vol_data.json
/var/lib/kubelet/pods/7bc8db86-7b2a-499a-adcc-bae0aaed4a42/volumes/kubernetes.io~csi/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1/mount
➜  ~ sudo find / 2>/dev/null | grep -E '/var/lib/kubelet.*pvc.*' | grep pvc-d37fcca0-ccb2-4e4a-9ddb-12d52ab39ca8
# Many, many more files. Removed for brevity
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37fcca0-ccb2-4e4a-9ddb-12d52ab39ca8/globalmount/plugins/plugin-manager.jar
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37fcca0-ccb2-4e4a-9ddb-12d52ab39ca8/globalmount/plugins/webhooks.jar
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37fcca0-ccb2-4e4a-9ddb-12d52ab39ca8/globalmount/etc/replication.config
➜  ~

As you can see, the data for RWO is populated; for RWX it is not.

Logs from CSI: kubectl get pods -ndemocratic-csi -o jsonpath="{.items[*].spec.containers[*].name}" | sed 's/ /\n/g' | xargs -I{} kubectl logs -ndemocratic-csi --container={} pod/zfs-local-dataset-democratic-csi-node-jvqdm > logs.log

logs.log

Edit2: And mount:

➜  ~ mount | grep pvc
main/k8s/zfs-local/volumes/pvc-f286f25b-8e97-4375-86e8-b912c36eba09 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f286f25b-8e97-4375-86e8-b912c36eba09/globalmount type zfs (rw,relatime,xattr,posixacl)
main/k8s/zfs-local/volumes/pvc-f286f25b-8e97-4375-86e8-b912c36eba09 on /var/lib/kubelet/pods/52c7ac55-cfc1-4c84-a606-0c7d62ea424e/volumes/kubernetes.io~csi/pvc-f286f25b-8e97-4375-86e8-b912c36eba09/mount type zfs (rw,relatime,xattr,posixacl)
main/k8s/zfs-local/volumes/pvc-d37fcca0-ccb2-4e4a-9ddb-12d52ab39ca8 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37fcca0-ccb2-4e4a-9ddb-12d52ab39ca8/globalmount type zfs (rw,relatime,xattr,posixacl)
main/k8s/zfs-local/volumes/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1/globalmount type zfs (rw,relatime,xattr,posixacl)
main/k8s/zfs-local/volumes/pvc-d37fcca0-ccb2-4e4a-9ddb-12d52ab39ca8 on /var/lib/kubelet/pods/7bc8db86-7b2a-499a-adcc-bae0aaed4a42/volumes/kubernetes.io~csi/pvc-d37fcca0-ccb2-4e4a-9ddb-12d52ab39ca8/mount type zfs (rw,relatime,xattr,posixacl)
main/k8s/zfs-local/volumes/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1 on /var/lib/kubelet/pods/7bc8db86-7b2a-499a-adcc-bae0aaed4a42/volumes/kubernetes.io~csi/pvc-9f31f603-93ab-4be7-88e1-10f3d0a8c8e1/mount type zfs (rw,relatime,xattr,posixacl)

travisghansen commented 2 years ago

Ok but is the mount there on the node? RWX is handled slightly differently by kubelet so it could be related to fs permissions in the container itself. Can you send the logs for the container failing to start?

Venthe commented 2 years ago

Ok but is the mount there on the node?

I am unsure what you mean. Both k8s and zfs are on the same node.

Can you send the logs for the container failing to start?

The container is starting, but there is no data - so the container fails because of it - as in, there is no way to ADD data to the RWX volume. Sorry, I think I am reaching the limit of my knowledge; can you be a little bit more descriptive?

travisghansen commented 2 years ago

Based on the output you sent the 8e1 pvc is rwx. It appears to be bound and also mounted on the node (mount output).

What data are you expecting to be in there? It appears to me there is an init error on the pod in question? If so the pod is starting (or as close to starting as you can get) but failing during the init. My guess is it’s failing during the init because of permission errors on the mount (not being able to write or whatever) but it seems likely the mount is there in the container etc.

travisghansen commented 2 years ago

https://github.com/democratic-csi/charts/blob/master/stable/democratic-csi/values.yaml#L40

Set the value of that to File and try again. I think it may solve your issue.
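
For context, that chart value appears to map to the Kubernetes CSIDriver fsGroupPolicy field (the exact chart key is an assumption; check the linked values.yaml), so the override would look something like:

csiDriver:
  name: "org.democratic-csi.zfs-local-dataset"
  # File tells kubelet to recursively chown/chmod the volume contents to the
  # pod's fsGroup on mount, which addresses 'Permission denied' errors like the
  # one below for non-root containers
  fsGroupPolicy: File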

Venthe commented 2 years ago

Will do. In the meantime, I've shell'd into the container, and I can confirm:

/var/mnt/git $ touch file
touch: file: Permission denied

travisghansen commented 2 years ago

Yeah there ya go. It's not really an issue with the csi driver...it's 'just' a file permissions error. For example if you ran the pod/container as root I don't think you would have any issues. Anyway try out that change and I'm guessing things will work for you.

Venthe commented 2 years ago

It seems to be working, thank you very much. As of now gerrit has successfully initialized.

I'll get back to the testing & provide more feedback if needed

Edit: And sorry for the trouble.

travisghansen commented 2 years ago

I’ll change the example yaml files to be that by default.

travisghansen commented 2 years ago

Any further feedback on this? I've updated the example helm values files and also updated the code to accept RWX by default.

Venthe commented 2 years ago

At this point everything seems to be working. While it would be nice to resize the volume, I don't believe it's possible?

Either way, no bugs, no problems, CSI seems stable

travisghansen commented 2 years ago

Resize is dependent on the k8s bits coming together; when they do, it will 'just work', but that bit is out of my hands atm.

On the other hand, snapshots are supposed to work currently but I didn’t test it out.

travisghansen commented 2 years ago

https://github.com/kubernetes-csi/external-resizer/issues/142

travisghansen commented 2 years ago

I've just updated the chart examples and made minor adjustments to support snapshots in the deployment. Use chart 0.10.1 and be sure to enable the snapshotter per the sample yaml file. You also must deploy the snapshot controller (cluster-wide bit) with this argument: https://github.com/democratic-csi/charts/blob/master/stable/snapshot-controller/values.yaml#L25
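
Once the snapshotter and snapshot controller are deployed, taking a snapshot of one of the volumes above would look roughly like this (raw manifests as a sketch; the class can also be created via the chart's volumeSnapshotClasses value, and the claim name/namespace are taken from the earlier examples):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: zfs-local-dataset
driver: org.democratic-csi.zfs-local-dataset
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-etcd-0-snapshot
  namespace: external-dns
spec:
  volumeSnapshotClassName: zfs-local-dataset
  source:
    persistentVolumeClaimName: data-etcd-0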

travisghansen commented 2 years ago

Released in v1.5.1, use chart version 0.10.1.

travisghansen commented 2 years ago

I've submitted a PR here for resizing support: https://github.com/kubernetes-csi/external-resizer/pull/195