Closed tjmchenry closed 2 years ago
Thanks for the ticket! I'm transferring the ownership of the ticket to @blaisedias who knows CSI plugin internals very well. Blaise, your opinion would be appreciated 🙇
The error is being returned because for some reason the parent directory is not found.
The check and comment in the code is based on the CSI spec at https://github.com/container-storage-interface/spec/blob/master/spec.md#nodepublishvolume
The description for the target_path
parameter states, and I quote
// The CO SHALL ensure that the parent directory of this path exists
// and that the process serving the request has `read` and `write`
// permissions to that parent directory.
I think it would be incorrect for the CSI node plugin to create the parent path if it does not exist.
Is that 'parent directory' referring to kubernetes.io~csi? or to pvc-<id>?
.../var/lib/kubelet/pods/ba5605a9-a70a-41c1-a50c-dde6deac128b/volumes/kubernetes.io~csi/pvc-821ccd22-e4a4-40e8-9f91-02e975eda11b/mount
// For volumes with an access type of block, the SP SHALL place the
// block device at target_path.
// For volumes with an access type of mount, the SP SHALL place the
// mounted directory at target_path.
Since the target path is .../pvc-<id>/mount
does it need to check one higher for the parent?
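In plain path terms, the immediate parent of `.../pvc-<id>/mount` is the `pvc-<id>` directory, not `kubernetes.io~csi`. A quick Rust sketch (my own illustration with an abbreviated, hypothetical path, not plugin code):

```rust
use std::path::Path;

fn main() {
    // Abbreviated, hypothetical target_path from a NodePublishVolume request.
    let target = Path::new(
        "/var/lib/kubelet/pods/ba5605a9/volumes/kubernetes.io~csi/pvc-821ccd22/mount",
    );

    // Path::parent strips the final component, so the spec's "parent
    // directory" of the target path is the pvc-<id> directory...
    let parent = target.parent().unwrap();
    assert!(parent.ends_with("pvc-821ccd22"));

    // ...while kubernetes.io~csi is one level higher still.
    let grandparent = parent.parent().unwrap();
    assert!(grandparent.ends_with("kubernetes.io~csi"));

    println!("parent = {}", parent.display());
}
```

If the spec means this immediate parent, then creating `.../pvc-<id>` before the publish call would be the CO's (kubelet's) job, not the plugin's.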
It seems kubelet used to create the target path improperly, and that was changed in 1.20 which I am running.
Thanks, I will investigate further.
I am also seeing this same error on a fresh install of Mayastor on a new microk8s cluster with bound PV/PVCs.
Here's the error...
MountVolume.SetUp failed for volume "pvc-f27893b7-5713-4f99-bc01-1264c6effcf5" : rpc error: code = Internal desc = Failed to find parent dir for mountpoint /var/snap/microk8s/common/var/lib/kubelet/pods/40dfe415-7d1b-4499-84ef-6fb7155aea0c/volumes/kubernetes.io~csi/pvc-f27893b7-5713-4f99-bc01-1264c6effcf5/mount, volume f27893b7-5713-4f99-bc01-1264c6effcf5
And a snippet of the dir...
I have run a bunch of tests using a fairly recent mayastor build on kubernetes versions 1.20.5 and 1.21, and they have all passed. At the moment the primary focus of our work on Mayastor is kubernetes. Given that, it may be some time from now, but we will investigate this issue on microk8s.
@TheNakedZealot have you looked at doc/microk8s.md? It used to work -- but development of K8s and microk8s is fast paced, so I'm not sure if it still helps.
> I have run a bunch of tests using a fairly recent mayastor build on kubernetes versions 1.20.5 and 1.21 and they have all passed.
@blaisedias Is this the documentation you used for installation, by chance? (https://github.com/openebs/Mayastor/blob/develop/doc/microk8s.md)
@gila, can you comment on if there are any differences between what was linked in your readme that currently 404s (https://github.com/openebs/Mayastor/blob/develop/doc/quick.md) and the current gitbook documentation (https://mayastor.gitbook.io/introduction/)?
Thanks for the help!
@TheNakedZealot I haven't run using microk8s, but earlier in the discussion in this issue, https://github.com/kubernetes/kubernetes/issues/75535 was mentioned, and that is what I was checking for on kubernetes.
Hi @TheNakedZealot I tried out mayastor 0.8.0 on microk8s and was able to get fio running successfully. Below are the details of my setup.
host:
$ uname -a
Linux mayadata 5.8.0-29-generic #31~20.04.1-Ubuntu SMP Fri Nov 6 16:10:42 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
microk8s:
$ snap info microk8s
name: microk8s
summary: Lightweight Kubernetes for workstations and appliances
publisher: Canonical✓
store-url: https://snapcraft.io/microk8s
contact: https://github.com/ubuntu/microk8s
license: unset
description: |
MicroK8s is the smallest, simplest, pure production Kubernetes for clusters, laptops, IoT and
Edge, on Intel and ARM. One command installs a single-node K8s cluster with carefully selected
add-ons on Linux, Windows and macOS. MicroK8s requires no configuration, supports automatic
updates and GPU acceleration. Use it for offline development, prototyping, testing, to build your
CI/CD pipeline or your IoT apps.
commands:
- microk8s.add-node
- microk8s.cilium
- microk8s.config
- microk8s.ctr
- microk8s.dashboard-proxy
- microk8s.dbctl
- microk8s.disable
- microk8s.enable
- microk8s.helm
- microk8s.helm3
- microk8s.inspect
- microk8s.istioctl
- microk8s.join
- microk8s.juju
- microk8s.kubectl
- microk8s.leave
- microk8s.linkerd
- microk8s
- microk8s.refresh-certs
- microk8s.remove-node
- microk8s.reset
- microk8s.start
- microk8s.status
- microk8s.stop
services:
microk8s.daemon-apiserver: simple, enabled, active
microk8s.daemon-apiserver-kicker: simple, enabled, active
microk8s.daemon-cluster-agent: simple, enabled, active
microk8s.daemon-containerd: simple, enabled, active
microk8s.daemon-control-plane-kicker: simple, enabled, active
microk8s.daemon-controller-manager: simple, enabled, active
microk8s.daemon-etcd: simple, enabled, inactive
microk8s.daemon-flanneld: simple, enabled, inactive
microk8s.daemon-kubelet: simple, enabled, active
microk8s.daemon-proxy: simple, enabled, active
microk8s.daemon-scheduler: simple, enabled, active
snap-id: EaXqgt1lyCaxKaQCU349mlodBkDCXRcg
tracking: 1.20/stable
refresh-date: today at 12:27 IST
channels:
1.20/stable: v1.20.5 2021-04-02 (2094) 218MB classic
1.20/candidate: v1.20.5 2021-03-27 (2094) 218MB classic
1.20/beta: v1.20.5 2021-03-27 (2094) 218MB classic
1.20/edge: v1.20.5 2021-03-22 (2094) 218MB classic
latest/stable: v1.20.5 2021-04-05 (2094) 218MB classic
latest/candidate: v1.21.0 2021-04-11 (2126) 189MB classic
latest/beta: v1.21.0 2021-04-11 (2126) 189MB classic
latest/edge: v1.21.0 2021-04-13 (2136) 189MB classic
dqlite/stable: --
dqlite/candidate: --
dqlite/beta: --
dqlite/edge: v1.16.2 2019-11-07 (1038) 189MB classic
1.21/stable: v1.21.0 2021-04-12 (2128) 189MB classic
1.21/candidate: v1.21.0 2021-04-11 (2128) 189MB classic
1.21/beta: v1.21.0 2021-04-11 (2128) 189MB classic
1.21/edge: v1.21.0 2021-04-13 (2135) 189MB classic
1.19/stable: v1.19.9 2021-04-02 (2095) 216MB classic
1.19/candidate: v1.19.9 2021-03-26 (2095) 216MB classic
1.19/beta: v1.19.9 2021-03-26 (2095) 216MB classic
1.19/edge: v1.19.9 2021-04-13 (2134) 216MB classic
1.18/stable: v1.18.17 2021-04-09 (2102) 198MB classic
1.18/candidate: v1.18.17 2021-04-02 (2102) 198MB classic
1.18/beta: v1.18.17 2021-04-02 (2102) 198MB classic
1.18/edge: v1.18.17 2021-04-01 (2102) 198MB classic
1.17/stable: v1.17.17 2021-01-15 (1916) 177MB classic
1.17/candidate: v1.17.17 2021-01-14 (1916) 177MB classic
1.17/beta: v1.17.17 2021-01-14 (1916) 177MB classic
1.17/edge: v1.17.17 2021-01-13 (1916) 177MB classic
1.16/stable: v1.16.15 2020-09-12 (1671) 179MB classic
1.16/candidate: v1.16.15 2020-09-04 (1671) 179MB classic
1.16/beta: v1.16.15 2020-09-04 (1671) 179MB classic
1.16/edge: v1.16.15 2020-09-02 (1671) 179MB classic
1.15/stable: v1.15.11 2020-03-27 (1301) 171MB classic
1.15/candidate: v1.15.11 2020-03-27 (1301) 171MB classic
1.15/beta: v1.15.11 2020-03-27 (1301) 171MB classic
1.15/edge: v1.15.11 2020-03-26 (1301) 171MB classic
1.14/stable: v1.14.10 2020-01-06 (1120) 217MB classic
1.14/candidate: ^
1.14/beta: ^
1.14/edge: v1.14.10 2020-03-26 (1303) 217MB classic
1.13/stable: v1.13.6 2019-06-06 (581) 237MB classic
1.13/candidate: ^
1.13/beta: ^
1.13/edge: ^
1.12/stable: v1.12.9 2019-06-06 (612) 259MB classic
1.12/candidate: ^
1.12/beta: ^
1.12/edge: ^
1.11/stable: v1.11.10 2019-05-10 (557) 258MB classic
1.11/candidate: ^
1.11/beta: ^
1.11/edge: ^
1.10/stable: v1.10.13 2019-04-22 (546) 222MB classic
1.10/candidate: ^
1.10/beta: ^
1.10/edge: ^
installed: v1.20.5 (2094) 218MB classic
Before installing mayastor, I enabled dns on the cluster:
$ microk8s.enable dns
After installing mayastor, I updated the kubelet path in the CSI daemonset to /var/snap/microk8s/common/var/lib/kubelet/
msp:
apiVersion: "openebs.io/v1alpha1"
kind: MayastorPool
metadata:
  name: microk8s-pool
  namespace: mayastor
spec:
  node: mayadata
  disks: ["/dev/sdb"]
storageclass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-nvmf
parameters:
  # Set the number of data replicas ("replication factor")
  repl: '1'
  # Set the export transport protocol
  protocol: 'nvmf'
provisioner: io.openebs.csi-mayastor
After provisioning pvc and creating the pod:
$ microk8s.kubectl get msn,msp,msv -n mayastor
NAME STATE AGE
mayastornode.openebs.io/mayadata online 5h20m
NAME NODE STATE AGE
mayastorpool.openebs.io/microk8s-pool mayadata online 41m
NAME TARGETS SIZE STATE AGE
mayastorvolume.openebs.io/ae0c3855-4008-4a6b-abb4-8d7162b21ff7 ["mayadata"] 1073741824 healthy 41m
$ microk8s.kubectl get pvc,pv,po
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/ms-volume-claim Bound pvc-ae0c3855-4008-4a6b-abb4-8d7162b21ff7 1Gi RWO mayastor-nvmf 113s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-ae0c3855-4008-4a6b-abb4-8d7162b21ff7 1Gi RWO Delete Bound default/ms-volume-claim mayastor-nvmf 112s
NAME READY STATUS RESTARTS AGE
pod/fio 1/1 Running 0 40s
mount info:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 99.2M 1 loop /snap/core/10958
loop1 7:1 0 99.2M 1 loop /snap/core/10908
loop2 7:2 0 55.5M 1 loop /snap/core18/1988
loop3 7:3 0 55.5M 1 loop /snap/core18/1997
loop4 7:4 0 217.9M 1 loop /snap/gnome-3-34-1804/60
loop5 7:5 0 219M 1 loop /snap/gnome-3-34-1804/66
loop6 7:6 0 2.2M 1 loop /snap/gnome-system-monitor/148
loop7 7:7 0 2.2M 1 loop /snap/gnome-system-monitor/157
loop8 7:8 0 64.4M 1 loop /snap/gtk-common-themes/1513
loop9 7:9 0 64.8M 1 loop /snap/gtk-common-themes/1514
loop10 7:10 0 208.2M 1 loop /snap/microk8s/2094
loop11 7:11 0 83M 1 loop /snap/scrcpy/269
loop12 7:12 0 83M 1 loop /snap/scrcpy/274
loop13 7:13 0 51M 1 loop /snap/snap-store/518
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 931G 0 part /
sdb 8:16 1 14.3G 0 disk
nvme0n1 259:1 0 1019M 0 disk /var/snap/microk8s/common/var/lib/kubelet/pods/379ef641-5266-4c4b-8254-b814c3811341/volumes/kubernetes.io~cs
$ cat /proc/mounts | grep nvme
/dev/nvme0n1 /var/snap/microk8s/common/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ae0c3855-4008-4a6b-abb4-8d7162b21ff7/globalmount xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/nvme0n1 /var/snap/microk8s/common/var/lib/kubelet/pods/379ef641-5266-4c4b-8254-b814c3811341/volumes/kubernetes.io~csi/pvc-ae0c3855-4008-4a6b-abb4-8d7162b21ff7/mount xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
Hi,
I spent some time trying to find the source of this problem; hopefully it's useful information and not my misunderstanding.
Describe the bug: The mountpoint for the target_path isn't being created, resulting in the error from csi/src/node.rs:232
kubectl describe pod fio
To Reproduce:
1. Install microk8s.
2. Follow the tutorial from https://mayastor.gitbook.io/introduction/quickstart/scope
3. Modify the relevant yaml to point /var/lib/kubelet/ to /var/snap/microk8s/common/var/lib/kubelet/
4. Create a file-backed disk for pools on 4 nodes.
5. Configure for nvmf.
6. Finish the tutorial.
7. Create the test fio pod.
Expected behavior: The target_path directory should be created.
Additional context: The target_path directory seems to get created in publish_fs_volume at csi/src/filesystem_vol.rs:240, but if I'm reading this correctly (and I might not be), the check that the parent directory exists happens in csi/src/node.rs:node_publish_volume, before publish_fs_volume is ever called.
In contrast, staging (which finishes correctly for me) in csi/src/node.rs:node_stage_volume does not check whether the staging_path parent exists or is a directory before (or seemingly after) csi/src/filesystem_vol.rs:stage_fs_volume.
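To make the suspected ordering concrete, here is a minimal, standalone sketch (my own illustration, not the actual Mayastor code): a parent-directory check that runs before the step that would create the target directory fails whenever the CO has not created the pvc-<id> directory first.

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Stand-in for the check in node_publish_volume: fail if the parent
// of the target path does not exist (per the CSI spec, the CO should
// have created it before calling the plugin).
fn check_parent_dir(target: &Path) -> Result<(), String> {
    match target.parent() {
        Some(p) if p.is_dir() => Ok(()),
        _ => Err(format!(
            "Failed to find parent dir for mountpoint {}",
            target.display()
        )),
    }
}

// Stand-in for publish_fs_volume: creates the target directory itself.
fn create_target_dir(target: &Path) -> std::io::Result<()> {
    fs::create_dir_all(target)
}

fn main() {
    let base: PathBuf = std::env::temp_dir().join("csi-ordering-demo");
    let _ = fs::remove_dir_all(&base); // start from a clean slate
    let target = base.join("pvc-demo").join("mount");

    // The parent (.../pvc-demo) is missing, so the check fails first --
    // the code that would create the target directory never runs.
    assert!(check_parent_dir(&target).is_err());

    // Once the parent hierarchy exists (the CO's job), both steps succeed.
    fs::create_dir_all(target.parent().unwrap()).unwrap();
    assert!(check_parent_dir(&target).is_ok());
    create_target_dir(&target).unwrap();
    assert!(target.is_dir());
}
```

On this reading, the error message alone cannot say whether the plugin's check is too strict or whether kubelet (here, under the microk8s snap path) simply never created `.../pvc-<id>`.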
The line debug!("Creating directory {}", target_path); never appears in the logs.