openebs / mayastor

Dynamically provision Stateful Persistent Replicated Cluster-wide Fabric Volumes & Filesystems for Kubernetes, provisioned from an optimized NVMe SPDK backend data storage stack.
Apache License 2.0

Parent directory path not created for target_path mountpoint #781

Closed tjmchenry closed 2 years ago

tjmchenry commented 3 years ago

Hi,

I spent some time trying to find the source for this problem, hopefully it's useful information and not my misunderstanding.

Describe the bug
The mountpoint directory for the target_path isn't being created, resulting in the error from csi/src/node.rs:232:

// The CO must ensure that the parent of target path exists,
// make sure that it exists.
let target_parent = Path::new(&msg.target_path).parent().unwrap();
if !target_parent.exists() || !target_parent.is_dir() {
    return Err(Status::new(
        Code::Internal,
        format!(
            "Failed to find parent dir for mountpoint {}, volume {}",
            &msg.target_path, &msg.volume_id
        ),
    ));
}

kubectl describe pod fio

Events:
  Type     Reason                  Age              From                     Message
  ----     ------                  ----             ----                     -------
  Normal   Scheduled               15s              default-scheduler        Successfully assigned default/fio to ocdash2
  Normal   SuccessfulAttachVolume  14s              attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-d25a42e0-20a6-4198-9ba3-49060dd6f3fe"
  Warning  FailedMount             2s (x4 over 6s)  kubelet                  MountVolume.SetUp failed for volume "pvc-d25a42e0-20a6-4198-9ba3-49060dd6f3fe" : rpc error: code = Internal desc = Failed to find parent dir for mountpoint /var/snap/microk8s/common/var/lib/kubelet/pods/34b54fe7-dda9-4e72-a441-61b0daaa156f/volumes/kubernetes.io~csi/pvc-d25a42e0-20a6-4198-9ba3-49060dd6f3fe/mount, volume d25a42e0-20a6-4198-9ba3-49060dd6f3fe

To Reproduce
1. Install microk8s.
2. Follow the tutorial from https://mayastor.gitbook.io/introduction/quickstart/scope
3. Modify the relevant yaml to point /var/lib/kubelet/ to /var/snap/microk8s/common/var/lib/kubelet/ (see the sketch below).
4. Create a file-backed disk for pools on 4 nodes.
5. Configure for nvmf.
6. Finish the tutorial.
7. Create the test fio pod.
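
For step 3, the change amounts to prefixing every kubelet hostPath in the mayastor CSI DaemonSet with the microk8s snap directory. A minimal sketch, assuming typical CSI volume names (registration-dir, kubelet-dir), which may not match csi-daemonset.yaml verbatim:

volumes:
  # Illustrative entries only: every hostPath that normally sits under
  # /var/lib/kubelet/ gets the /var/snap/microk8s/common prefix instead.
  - name: registration-dir
    hostPath:
      path: /var/snap/microk8s/common/var/lib/kubelet/plugins_registry/
      type: Directory
  - name: kubelet-dir
    hostPath:
      path: /var/snap/microk8s/common/var/lib/kubelet/
      type: Directory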

Expected behavior
The target_path directory should be created.

Additional context
The target_path directory seems to get created in publish_fs_volume at csi/src/filesystem_vol.rs:240, but if I'm reading this correctly (and I might not be), the check that raises the error happens in csi/src/node.rs:node_publish_volume before publish_fs_volume is ever called.

In contrast, staging (which finishes correctly for me) in csi/src/node.rs:node_stage_volume does not check whether the staging_path parent exists or is a directory, either before or (seemingly) after csi/src/filesystem_vol.rs:stage_fs_volume.

[2021-03-10T11:54:23Z DEBUG mayastor_csi::node] NodeGetCapabilities request: [StageUnstageVolume] 
[2021-03-10T11:54:23Z TRACE mayastor_csi::node] node_stage_volume NodeStageVolumeRequest { volume_id: "821ccd22-e4a4-40e8-9f91-02e975eda11b", publish_context: {"uri": "nvmf://10.211.150.219:8420/nqn.2019-05.io.openebs:nexus-821ccd22-e4a4-40e8-9f91-02e975eda11b"}, staging_target_path: "/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/globalmount", volume_capability: Some(VolumeCapability { access_mode: Some(AccessMode { mode: SingleNodeWriter }), access_type: Some(Mount(MountVolume { fs_type: "ext4", mount_flags: [] })) }), secrets: {}, volume_context: {"storage.kubernetes.io/csiProvisionerIdentity": "1615258768003-8081-io.openebs.csi-mayastor", "protocol": "nvmf", "repl": "3"} } 
[2021-03-10T11:54:23Z DEBUG mayastor_csi::node] Volume 821ccd22-e4a4-40e8-9f91-02e975eda11b has URI nvmf://10.211.150.219:8420/nqn.2019-05.io.openebs:nexus-821ccd22-e4a4-40e8-9f91-02e975eda11b 
[2021-03-10T11:54:23Z DEBUG mayastor_csi::node] Attaching volume 821ccd22-e4a4-40e8-9f91-02e975eda11b 
[2021-03-10T11:54:24Z DEBUG mayastor_csi::filesystem_vol] Staging volume 821ccd22-e4a4-40e8-9f91-02e975eda11b to /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/globalmount 
[2021-03-10T11:54:24Z DEBUG mayastor_csi::format] Probing device /dev/nvme1n1 
[2021-03-10T11:54:24Z DEBUG mayastor_csi::format] Creating new filesystem (ext4) on device /dev/nvme1n1 
[2021-03-10T11:54:24Z TRACE mayastor_csi::format] Output from mkfs.ext4 command: Discarding device blocks:   4096/523003             done
    Creating filesystem with 523003 4k blocks and 130816 inodes
    Filesystem UUID: a4049e00-7039-486d-ad4e-ff23aec0f4b9
    Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912
    Allocating group tables:  0/16     done
    Writing inode tables:  0/16     done
    Creating journal (8192 blocks): done
    Writing superblocks and filesystem accounting information:  0/16     done
[2021-03-10T11:54:24Z DEBUG mayastor_csi::filesystem_vol] Mounting device /dev/nvme1n1 onto /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/globalmount 
[2021-03-10T11:54:24Z DEBUG mayastor_csi::mount] Filesystem (ext4) on device /dev/nvme1n1 mounted onto target /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/globalmount (options: none) 
[2021-03-10T11:54:24Z INFO  mayastor_csi::filesystem_vol] Volume 821ccd22-e4a4-40e8-9f91-02e975eda11b staged to /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/globalmount 
[2021-03-10T11:54:24Z DEBUG mayastor_csi::node] NodeGetCapabilities request: [StageUnstageVolume] 
[2021-03-10T11:54:24Z TRACE mayastor_csi::node] node_publish_volume NodePublishVolumeRequest { volume_id: "821ccd22-e4a4-40e8-9f91-02e975eda11b", publish_context: {"uri": "nvmf://10.211.150.219:8420/nqn.2019-05.io.openebs:nexus-821ccd22-e4a4-40e8-9f91-02e975eda11b"}, staging_target_path: "/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/globalmount", target_path: "/var/snap/microk8s/common/var/lib/kubelet/pods/ba5605a9-a70a-41c1-a50c-dde6deac128b/volumes/kubernetes.io~csi/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/mount", volume_capability: Some(VolumeCapability { access_mode: Some(AccessMode { mode: SingleNodeWriter }), access_type: Some(Mount(MountVolume { fs_type: "ext4", mount_flags: [] })) }), readonly: false, secrets: {}, volume_context: {"protocol": "nvmf", "storage.kubernetes.io/csiProvisionerIdentity": "1615258768003-8081-io.openebs.csi-mayastor", "repl": "3"} } 
[2021-03-10T11:54:25Z DEBUG mayastor_csi::node] NodeGetCapabilities request: [StageUnstageVolume] 
[2021-03-10T11:54:25Z TRACE mayastor_csi::node] node_publish_volume NodePublishVolumeRequest { volume_id: "821ccd22-e4a4-40e8-9f91-02e975eda11b", publish_context: {"uri": "nvmf://10.211.150.219:8420/nqn.2019-05.io.openebs:nexus-821ccd22-e4a4-40e8-9f91-02e975eda11b"}, staging_target_path: "/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/globalmount", target_path: "/var/snap/microk8s/common/var/lib/kubelet/pods/ba5605a9-a70a-41c1-a50c-dde6deac128b/volumes/kubernetes.io~csi/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/mount", volume_capability: Some(VolumeCapability { access_mode: Some(AccessMode { mode: SingleNodeWriter }), access_type: Some(Mount(MountVolume { fs_type: "ext4", mount_flags: [] })) }), readonly: false, secrets: {}, volume_context: {"repl": "3", "storage.kubernetes.io/csiProvisionerIdentity": "1615258768003-8081-io.openebs.csi-mayastor", "protocol": "nvmf"} } 
[2021-03-10T11:54:26Z DEBUG mayastor_csi::node] NodeGetCapabilities request: [StageUnstageVolume] 
[2021-03-10T11:54:26Z TRACE mayastor_csi::node] node_publish_volume NodePublishVolumeRequest { volume_id: "821ccd22-e4a4-40e8-9f91-02e975eda11b", publish_context: {"uri": "nvmf://10.211.150.219:8420/nqn.2019-05.io.openebs:nexus-821ccd22-e4a4-40e8-9f91-02e975eda11b"}, staging_target_path: "/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/globalmount", target_path: "/var/snap/microk8s/common/var/lib/kubelet/pods/ba5605a9-a70a-41c1-a50c-dde6deac128b/volumes/kubernetes.io~csi/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/mount", volume_capability: Some(VolumeCapability { access_mode: Some(AccessMode { mode: SingleNodeWriter }), access_type: Some(Mount(MountVolume { fs_type: "ext4", mount_flags: [] })) }), readonly: false, secrets: {}, volume_context: {"protocol": "nvmf", "storage.kubernetes.io/csiProvisionerIdentity": "1615258768003-8081-io.openebs.csi-mayastor", "repl": "3"} } 
[2021-03-10T11:54:28Z DEBUG mayastor_csi::node] NodeGetCapabilities request: [StageUnstageVolume] 
[2021-03-10T11:54:28Z TRACE mayastor_csi::node] node_publish_volume NodePublishVolumeRequest { volume_id: "821ccd22-e4a4-40e8-9f91-02e975eda11b", publish_context: {"uri": "nvmf://10.211.150.219:8420/nqn.2019-05.io.openebs:nexus-821ccd22-e4a4-40e8-9f91-02e975eda11b"}, staging_target_path: "/var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/globalmount", target_path: "/var/snap/microk8s/common/var/lib/kubelet/pods/ba5605a9-a70a-41c1-a50c-dde6deac128b/volumes/kubernetes.io~csi/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/mount", volume_capability: Some(VolumeCapability { access_mode: Some(AccessMode { mode: SingleNodeWriter }), access_type: Some(Mount(MountVolume { fs_type: "ext4", mount_flags: [] })) }), readonly: false, secrets: {}, volume_context: {"storage.kubernetes.io/csiProvisionerIdentity": "1615258768003-8081-io.openebs.csi-mayastor", "protocol": "nvmf", "repl": "3"} } 
[2021-03-10T11:54:32Z DEBUG mayastor_csi::node] NodeGetCapabilities request: [StageUnstageVolume]

The debug!("Creating directory {}", target_path); line never appears in the logs.
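
To summarize the ordering I think I'm seeing, here is a condensed, hypothetical sketch (not the actual mayastor source) of how the parent-directory check in node_publish_volume sits in front of the directory creation in publish_fs_volume:

use std::fs;
use std::path::Path;

// Hypothetical stand-in for csi/src/filesystem_vol.rs:publish_fs_volume:
// this is where target_path itself would get created (the
// "Creating directory {}" debug line I never see).
fn publish_fs_volume(target_path: &str) -> Result<(), String> {
    fs::create_dir_all(target_path).map_err(|e| e.to_string())
}

// Hypothetical stand-in for csi/src/node.rs:node_publish_volume.
fn node_publish_volume(target_path: &str) -> Result<(), String> {
    let target_parent = Path::new(target_path)
        .parent()
        .ok_or_else(|| "target_path has no parent".to_string())?;

    // This check runs first; if the CO (kubelet) has not created the
    // parent directory, we error out here...
    if !target_parent.exists() || !target_parent.is_dir() {
        return Err(format!(
            "Failed to find parent dir for mountpoint {}",
            target_path
        ));
    }

    // ...and never reach the call that would create target_path itself.
    publish_fs_volume(target_path)
}

fn main() {
    // Example only: a target_path whose parent does not exist yet.
    println!("{:?}", node_publish_volume("/tmp/does-not-exist/pvc-example/mount"));
}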

jkryl commented 3 years ago

Thanks for the ticket! I'm transferring the ownership of the ticket to @blaisedias who knows CSI plugin internals very well. Blaise, your opinion would be appreciated 🙇

blaisedias commented 3 years ago

The error is being returned because for some reason the parent directory is not found.

The check and comment in the code are based on the CSI spec at https://github.com/container-storage-interface/spec/blob/master/spec.md#nodepublishvolume. The description of the target_path parameter states, and I quote:

  // The CO SHALL ensure that the parent directory of this path exists
  // and that the process serving the request has `read` and `write`
  // permissions to that parent directory.

I think it would be incorrect for the CSI node plugin to create the parent path if it does not exist.

tjmchenry commented 3 years ago

Is that 'parent directory' referring to kubernetes.io~csi, or to pvc-<id>? .../var/lib/kubelet/pods/ba5605a9-a70a-41c1-a50c-dde6deac128b/volumes/kubernetes.io~csi/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/mount

// For volumes with an access type of block, the SP SHALL place the
// block device at target_path.
// For volumes with an access type of mount, the SP SHALL place the
// mounted directory at target_path.

Since the target path is .../pvc-<id>/mount, does it need to check one level higher for the parent?
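
To make the levels concrete, here is a quick std::path illustration (nothing mayastor-specific) using the target_path from the logs above:

use std::path::Path;

fn main() {
    // target_path copied from the node_publish_volume request above
    let target_path = Path::new(
        "/var/snap/microk8s/common/var/lib/kubelet/pods/ba5605a9-a70a-41c1-a50c-dde6deac128b\
         /volumes/kubernetes.io~csi/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/mount",
    );

    // parent() is the pvc-<id> directory, one level above "mount"
    println!("parent:      {:?}", target_path.parent());

    // the parent of that is the kubernetes.io~csi directory
    println!("grandparent: {:?}", target_path.parent().and_then(Path::parent));
}

So the check in node.rs at line 232 is against the pvc-<id> directory, not against kubernetes.io~csi.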

It seems kubelet used to create the target path itself, which was improper, and that was changed in 1.20, which I am running.

https://github.com/kubernetes/kubernetes/issues/75535

blaisedias commented 3 years ago

Thanks, I will investigate further.

shrinedogg commented 3 years ago

I am also seeing this same error on a fresh install of Mayastor on a new microk8s cluster with bound PV/PVCs.

Here's the error...

MountVolume.SetUp failed for volume "pvc-f27893b7-5713-4f99-bc01-1264c6effcf5" : rpc error: code = Internal desc = Failed to find parent dir for mountpoint /var/snap/microk8s/common/var/lib/kubelet/pods/40dfe415-7d1b-4499-84ef-6fb7155aea0c/volumes/kubernetes.io~csi/pvc-f27893b7-5713-4f99-bc01-1264c6effcf5/mount, volume f27893b7-5713-4f99-bc01-1264c6effcf5

And a snippet of the dir...

[screenshot of the directory listing]

blaisedias commented 3 years ago

I have run a bunch of tests using a fairly recent mayastor build on kubernetes versions 1.20.5 and 1.21, and they have all passed. At the moment the primary focus of our work on Mayastor is kubernetes. Given that, it may be some time from now, but we will investigate this issue on microk8s.

gila commented 3 years ago

@TheNakedZealot have you looked at doc/microk8s.md? It used to work -- but development of K8s and microk8s is fast paced, so I'm not sure if it still helps.

shrinedogg commented 3 years ago

"I have run a bunch of tests using a fairly recent mayastor build on kubernetes versions 1.20.5 and 1.21 and they have all passed."

@blaisedias Is this the documentation you used for installation, by chance? (https://github.com/openebs/Mayastor/blob/develop/doc/microk8s.md)

@gila, can you comment on whether there are any differences between the doc linked in your readme that currently 404s (https://github.com/openebs/Mayastor/blob/develop/doc/quick.md) and the current gitbook documentation (https://mayastor.gitbook.io/introduction/)?

Thanks for the help!

blaisedias commented 3 years ago

@TheNakedZealot I haven't run using microk8s, but earlier in the discussion in this issue, https://github.com/kubernetes/kubernetes/issues/75535 was mentioned, and that is what I was checking for on kubernetes.

shubham14bajpai commented 3 years ago

Hi @TheNakedZealot I tried out mayastor 0.8.0 on microk8s and was able to get fio running successfully. Below are the details of my setup.

host:

$ uname -a
Linux mayadata 5.8.0-29-generic #31~20.04.1-Ubuntu SMP Fri Nov 6 16:10:42 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

microk8s:

$ snap info microk8s
name:      microk8s
summary:   Lightweight Kubernetes for workstations and appliances
publisher: Canonical*
store-url: https://snapcraft.io/microk8s
contact:   https://github.com/ubuntu/microk8s
license:   unset
description: |
  MicroK8s is the smallest, simplest, pure production Kubernetes for clusters, laptops, IoT and
  Edge, on Intel and ARM. One command installs a single-node K8s cluster with carefully selected
  add-ons on Linux, Windows and macOS.  MicroK8s requires no configuration, supports automatic
  updates and GPU acceleration. Use it for offline development, prototyping, testing, to build your
  CI/CD pipeline or your IoT apps.
commands:
  - microk8s.add-node
  - microk8s.cilium
  - microk8s.config
  - microk8s.ctr
  - microk8s.dashboard-proxy
  - microk8s.dbctl
  - microk8s.disable
  - microk8s.enable
  - microk8s.helm
  - microk8s.helm3
  - microk8s.inspect
  - microk8s.istioctl
  - microk8s.join
  - microk8s.juju
  - microk8s.kubectl
  - microk8s.leave
  - microk8s.linkerd
  - microk8s
  - microk8s.refresh-certs
  - microk8s.remove-node
  - microk8s.reset
  - microk8s.start
  - microk8s.status
  - microk8s.stop
services:
  microk8s.daemon-apiserver:            simple, enabled, active
  microk8s.daemon-apiserver-kicker:     simple, enabled, active
  microk8s.daemon-cluster-agent:        simple, enabled, active
  microk8s.daemon-containerd:           simple, enabled, active
  microk8s.daemon-control-plane-kicker: simple, enabled, active
  microk8s.daemon-controller-manager:   simple, enabled, active
  microk8s.daemon-etcd:                 simple, enabled, inactive
  microk8s.daemon-flanneld:             simple, enabled, inactive
  microk8s.daemon-kubelet:              simple, enabled, active
  microk8s.daemon-proxy:                simple, enabled, active
  microk8s.daemon-scheduler:            simple, enabled, active
snap-id:      EaXqgt1lyCaxKaQCU349mlodBkDCXRcg
tracking:     1.20/stable
refresh-date: today at 12:27 IST
channels:
  1.20/stable:      v1.20.5  2021-04-02 (2094) 218MB classic
  1.20/candidate:   v1.20.5  2021-03-27 (2094) 218MB classic
  1.20/beta:        v1.20.5  2021-03-27 (2094) 218MB classic
  1.20/edge:        v1.20.5  2021-03-22 (2094) 218MB classic
  latest/stable:    v1.20.5  2021-04-05 (2094) 218MB classic
  latest/candidate: v1.21.0  2021-04-11 (2126) 189MB classic
  latest/beta:      v1.21.0  2021-04-11 (2126) 189MB classic
  latest/edge:      v1.21.0  2021-04-13 (2136) 189MB classic
  dqlite/stable:    --                               
  dqlite/candidate: --                               
  dqlite/beta:      --                               
  dqlite/edge:      v1.16.2  2019-11-07 (1038) 189MB classic
  1.21/stable:      v1.21.0  2021-04-12 (2128) 189MB classic
  1.21/candidate:   v1.21.0  2021-04-11 (2128) 189MB classic
  1.21/beta:        v1.21.0  2021-04-11 (2128) 189MB classic
  1.21/edge:        v1.21.0  2021-04-13 (2135) 189MB classic
  1.19/stable:      v1.19.9  2021-04-02 (2095) 216MB classic
  1.19/candidate:   v1.19.9  2021-03-26 (2095) 216MB classic
  1.19/beta:        v1.19.9  2021-03-26 (2095) 216MB classic
  1.19/edge:        v1.19.9  2021-04-13 (2134) 216MB classic
  1.18/stable:      v1.18.17 2021-04-09 (2102) 198MB classic
  1.18/candidate:   v1.18.17 2021-04-02 (2102) 198MB classic
  1.18/beta:        v1.18.17 2021-04-02 (2102) 198MB classic
  1.18/edge:        v1.18.17 2021-04-01 (2102) 198MB classic
  1.17/stable:      v1.17.17 2021-01-15 (1916) 177MB classic
  1.17/candidate:   v1.17.17 2021-01-14 (1916) 177MB classic
  1.17/beta:        v1.17.17 2021-01-14 (1916) 177MB classic
  1.17/edge:        v1.17.17 2021-01-13 (1916) 177MB classic
  1.16/stable:      v1.16.15 2020-09-12 (1671) 179MB classic
  1.16/candidate:   v1.16.15 2020-09-04 (1671) 179MB classic
  1.16/beta:        v1.16.15 2020-09-04 (1671) 179MB classic
  1.16/edge:        v1.16.15 2020-09-02 (1671) 179MB classic
  1.15/stable:      v1.15.11 2020-03-27 (1301) 171MB classic
  1.15/candidate:   v1.15.11 2020-03-27 (1301) 171MB classic
  1.15/beta:        v1.15.11 2020-03-27 (1301) 171MB classic
  1.15/edge:        v1.15.11 2020-03-26 (1301) 171MB classic
  1.14/stable:      v1.14.10 2020-01-06 (1120) 217MB classic
  1.14/candidate:   ^                                
  1.14/beta:        ^                                
  1.14/edge:        v1.14.10 2020-03-26 (1303) 217MB classic
  1.13/stable:      v1.13.6  2019-06-06  (581) 237MB classic
  1.13/candidate:   ^                                
  1.13/beta:        ^                                
  1.13/edge:        ^                                
  1.12/stable:      v1.12.9  2019-06-06  (612) 259MB classic
  1.12/candidate:   ^                                
  1.12/beta:        ^                                
  1.12/edge:        ^                                
  1.11/stable:      v1.11.10 2019-05-10  (557) 258MB classic
  1.11/candidate:   ^                                
  1.11/beta:        ^                                
  1.11/edge:        ^                                
  1.10/stable:      v1.10.13 2019-04-22  (546) 222MB classic
  1.10/candidate:   ^                                
  1.10/beta:        ^                                
  1.10/edge:        ^                                
installed:          v1.20.5             (2094) 218MB classic

Before installing mayastor, I enabled dns on the cluster:

$ microk8s.enable dns

After installing mayastor, I updated the kubelet path in the csi daemonset to /var/snap/microk8s/common/var/lib/kubelet/

msp:

apiVersion: "openebs.io/v1alpha1"
kind: MayastorPool
metadata:
  name: microk8s-pool
  namespace: mayastor
spec:
  node: mayadata
  disks: ["/dev/sdb"]

storageclass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-nvmf
parameters:
  # Set the number of data replicas ("replication factor")
  repl: '1'
  # Set the export transport protocol
  protocol: 'nvmf'
provisioner: io.openebs.csi-mayastor

After provisioning pvc and creating the pod:

$ microk8s.kubectl get msn,msp,msv -n mayastor
NAME                               STATE    AGE
mayastornode.openebs.io/mayadata   online   5h20m

NAME                                    NODE       STATE    AGE
mayastorpool.openebs.io/microk8s-pool   mayadata   online   41m

NAME                                                             TARGETS        SIZE         STATE     AGE
mayastorvolume.openebs.io/ae0c3855-4008-4a6b-abb4-8d7162b21ff7   ["mayadata"]   1073741824   healthy   41m
$ microk8s.kubectl get pvc,pv,po
NAME                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
persistentvolumeclaim/ms-volume-claim   Bound    pvc-ae0c3855-4008-4a6b-abb4-8d7162b21ff7   1Gi        RWO            mayastor-nvmf   113s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS    REASON   AGE
persistentvolume/pvc-ae0c3855-4008-4a6b-abb4-8d7162b21ff7   1Gi        RWO            Delete           Bound    default/ms-volume-claim   mayastor-nvmf            112s

NAME      READY   STATUS    RESTARTS   AGE
pod/fio   1/1     Running   0          40s

mount info:

$ lsblk 
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0     7:0    0  99.2M  1 loop /snap/core/10958
loop1     7:1    0  99.2M  1 loop /snap/core/10908
loop2     7:2    0  55.5M  1 loop /snap/core18/1988
loop3     7:3    0  55.5M  1 loop /snap/core18/1997
loop4     7:4    0 217.9M  1 loop /snap/gnome-3-34-1804/60
loop5     7:5    0   219M  1 loop /snap/gnome-3-34-1804/66
loop6     7:6    0   2.2M  1 loop /snap/gnome-system-monitor/148
loop7     7:7    0   2.2M  1 loop /snap/gnome-system-monitor/157
loop8     7:8    0  64.4M  1 loop /snap/gtk-common-themes/1513
loop9     7:9    0  64.8M  1 loop /snap/gtk-common-themes/1514
loop10    7:10   0 208.2M  1 loop /snap/microk8s/2094
loop11    7:11   0    83M  1 loop /snap/scrcpy/269
loop12    7:12   0    83M  1 loop /snap/scrcpy/274
loop13    7:13   0    51M  1 loop /snap/snap-store/518
sda       8:0    0 931.5G  0 disk 
├─sda1    8:1    0   512M  0 part /boot/efi
└─sda2    8:2    0   931G  0 part /
sdb       8:16   1  14.3G  0 disk 
nvme0n1 259:1    0  1019M  0 disk /var/snap/microk8s/common/var/lib/kubelet/pods/379ef641-5266-4c4b-8254-b814c3811341/volumes/kubernetes.io~cs
$ cat /proc/mounts | grep nvme
/dev/nvme0n1 /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ae0c3855-4008-4a6b-abb4-8d7162b21ff7/globalmount xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/nvme0n1 /var/snap/microk8s/common/var/lib/kubelet/pods/379ef641-5266-4c4b-8254-b814c3811341/volumes/kubernetes.io~csi/pvc-ae0c3855-4008-4a6b-abb4-8d7162b21ff7/mount xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0