ceph / ceph-csi

CSI driver for Ceph
Apache License 2.0

Format of disk fails due to missing entry in /etc/fstab #264

Closed wilmardo closed 5 years ago

wilmardo commented 5 years ago

As the title states, formatting the disk fails due to a missing entry in /etc/fstab, or at least that is what the logs suggest to me. I don't know how to proceed further, since this seems to be an issue in the code itself, or at least a case that isn't handled: it tries to format an undefined ("") disk (a rough sketch of the kind of guard I would expect for this case follows the excerpt below):

Attempting to determine if disk "" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export ])
I0315 10:29:17.471037    5002 mount_linux.go:518] Output: "error: : No such file or directory\n", err: exit status 2
I0315 10:29:17.471138    5002 mount_linux.go:489] Disk "" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 ]
E0315 10:29:17.474260    5002 mount_linux.go:496] format of disk "" failed: type:("ext4") target:("/var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount") options:(["defaults"])error:(exit status 1)
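
From the log above, the device path handed to the format/mount step is simply an empty string, so blkid runs against "" and mount, lacking a source device, falls back to looking the target path up in /etc/fstab. Below is a minimal sketch of the kind of guard I would expect before formatting; the function and parameter names are illustrative, not the actual ceph-csi code:

```go
// Sketch only: a defensive check before handing a device to the format/mount
// step. formatAndMountVolume, devicePath and targetPath are illustrative
// names, not the actual ceph-csi identifiers.
package main

import (
	"fmt"
	"os"
)

func formatAndMountVolume(devicePath, targetPath, fsType string) error {
	// If the attach step returned no device, fail early with a clear error
	// instead of letting blkid/mkfs run against "" and letting mount fall
	// back to an /etc/fstab lookup for the target path.
	if devicePath == "" {
		return fmt.Errorf("empty device path for target %q, refusing to format as %s", targetPath, fsType)
	}
	if _, err := os.Stat(devicePath); err != nil {
		return fmt.Errorf("device %q is not accessible: %w", devicePath, err)
	}
	// ... hand off to the real format-and-mount implementation here.
	return nil
}

func main() {
	// Reproduces the situation from the log: no device path at all.
	err := formatAndMountVolume("", "/var/lib/kubelet/pods/.../mount", "ext4")
	fmt.Println(err)
}
```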

kubectl get nodes --show-labels (please ignore the NotReady nodes; the cluster is partially down at the moment due to missing resources). The cluster is running on CoreOS.

NAME    STATUS     ROLES    AGE     VERSION   LABELS
node1   Ready      master   6d23h   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node1,node-role.kubernetes.io/master=
node2   Ready      master   6d23h   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node2,node-role.kubernetes.io/master=
node3   NotReady   node     6d23h   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node3,node-role.kubernetes.io/node=
node4   NotReady   node     6d23h   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node4,node-role.kubernetes.io/node=
node5   Ready      node     6d23h   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node5,node-role.kubernetes.io/node=

csi-rbdplugin

I0315 10:29:17.424786    5002 utils.go:105] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0315 10:29:17.424826    5002 utils.go:106] GRPC request: {}
I0315 10:29:17.425659    5002 nodeserver-default.go:53] Using default NodeGetCapabilities
I0315 10:29:17.425672    5002 utils.go:111] GRPC response: {"capabilities":[{"Type":{"Rpc":{}}}]}
I0315 10:29:17.429206    5002 utils.go:105] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0315 10:29:17.429231    5002 utils.go:106] GRPC request: {}
I0315 10:29:17.429823    5002 nodeserver-default.go:53] Using default NodeGetCapabilities
I0315 10:29:17.429833    5002 utils.go:111] GRPC response: {"capabilities":[{"Type":{"Rpc":{}}}]}
I0315 10:29:17.440394    5002 utils.go:105] GRPC call: /csi.v1.Node/NodePublishVolume
I0315 10:29:17.440504    5002 utils.go:106] GRPC request: {"secrets":"***stripped***","target_path":"/var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"adminid":"admin","fsType":"ext4","imageFeatures":"layering","imageFormat":"2","monitors":"192.168.2.100:6789","mounter":"rbd-nbd","pool":"kubernetes","storage.kubernetes.io/csiProvisionerIdentity":"1552638730277-8081-","userid":"kubernetes"},"volume_id":"csi-rbd-vol-93c1ee26-470c-11e9-845a-0a580ae94410"}
I0315 10:29:17.442264    5002 hashed.go:47] hashedKeyMutex.LockKey(...) called for id "/var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount"
I0315 10:29:17.442299    5002 hashed.go:49] hashedKeyMutex.LockKey(...) for id "/var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount" completed.
I0315 10:29:17.442357    5002 nsenter.go:130] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/realpath -e /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount]
I0315 10:29:17.444454    5002 nsenter_mount.go:181] nsenter findmnt args: [-o target,fstype --noheadings --first-only --target /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount]
I0315 10:29:17.444583    5002 nsenter.go:130] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target,fstype --noheadings --first-only --target /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount]
I0315 10:29:17.449657    5002 nsenter_mount.go:195] IsLikelyNotMountPoint findmnt output for path /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount: /     :
I0315 10:29:17.449812    5002 nsenter_mount.go:201] IsLikelyNotMountPoint: /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount is not a mount point
I0315 10:29:17.450936    5002 nsenter.go:130] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/realpath -e /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount]
I0315 10:29:17.454621    5002 rbd_util.go:265] ignoreMultiNodeWritable flag in parse getRBDVolumeOptions is: true
I0315 10:29:17.454691    5002 rbd_attach.go:105] found nbds max parameters file at /sys/module/nbd/parameters/nbds_max
I0315 10:29:17.454806    5002 rbd_attach.go:117] rbd-nbd: max_nbds: 16
I0315 10:29:17.455042    5002 rbd_attach.go:181] rbd-nbd device /sys/block/nbd0 did not match expected image path: kubernetes/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13 with path found: kubernetes/pvc-bb37f12c-46fe-11e9-8f60-dac245f50f13
I0315 10:29:17.455086    5002 nodeserver.go:92] rbd image: csi-rbd-vol-93c1ee26-470c-11e9-845a-0a580ae94410/kubernetes was successfully mapped at
I0315 10:29:17.455108    5002 nodeserver.go:130] target /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount
isBlock false
fstype ext4
device
readonly false
attributes map[mounter:rbd-nbd userid:kubernetes adminid:admin imageFeatures:layering imageFormat:2 pool:kubernetes storage.kubernetes.io/csiProvisionerIdentity:1552638730277-8081- fsType:ext4 monitors:192.168.2.100:6789]
 mountflags []
I0315 10:29:17.455145    5002 mount_linux.go:441] Checking for issues with fsck on disk:
I0315 10:29:17.456882    5002 mount_linux.go:460] Attempting to mount disk: ext4  /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount
I0315 10:29:17.456928    5002 nsenter_mount.go:80] nsenter mount  /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount ext4 [defaults]
I0315 10:29:17.456952    5002 nsenter.go:130] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount --scope -- /bin/mount -t ext4 -o defaults /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount]
I0315 10:29:17.469146    5002 nsenter_mount.go:84] Output of mounting  to /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount: Running scope as unit: run-ra0724625556548238c1d4efef2b17722.scope
mount: /var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount: can't find in /etc/fstab.
I0315 10:29:17.469251    5002 mount_linux.go:515] Attempting to determine if disk "" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export ])
I0315 10:29:17.471037    5002 mount_linux.go:518] Output: "error: : No such file or directory\n", err: exit status 2
I0315 10:29:17.471138    5002 mount_linux.go:489] Disk "" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 ]
E0315 10:29:17.474260    5002 mount_linux.go:496] format of disk "" failed: type:("ext4") target:("/var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount") options:(["defaults"])error:(exit status 1)
I0315 10:29:17.474379    5002 hashed.go:54] hashedKeyMutex.UnlockKey(...) called for id "/var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount"
I0315 10:29:17.474429    5002 hashed.go:56] hashedKeyMutex.UnlockKey(...) for id "/var/lib/kubelet/pods/99854799-470c-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-93b49cbf-470c-11e9-8f60-dac245f50f13/mount" completed.
E0315 10:29:17.474505    5002 utils.go:109] GRPC error: exit status 1

csi-provisioner

I0315 10:24:55.506380       1 controller.go:926] provision "default/test-ceph-claim" class "csi-rbd": started
I0315 10:24:55.528990       1 controller.go:188] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0315 10:24:55.530037       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-ceph-claim", UID:"93b49cbf-470c-11e9-8f60-dac245f50f13", APIVersion:"v1", ResourceVersion:"715753", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/test-ceph-claim"
I0315 10:24:55.529020       1 controller.go:189] GRPC request: {}
I0315 10:24:55.535146       1 controller.go:191] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}}]}
I0315 10:24:55.536509       1 controller.go:192] GRPC error: <nil>
I0315 10:24:55.536532       1 controller.go:188] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0315 10:24:55.536541       1 controller.go:189] GRPC request: {}
I0315 10:24:55.548904       1 controller.go:191] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":7}}}]}
I0315 10:24:55.552241       1 controller.go:192] GRPC error: <nil>
I0315 10:24:55.552540       1 controller.go:188] GRPC call: /csi.v1.Identity/GetPluginInfo
I0315 10:24:55.552654       1 controller.go:189] GRPC request: {}
I0315 10:24:55.555212       1 controller.go:191] GRPC response: {"name":"csi-rbdplugin","vendor_version":"1.0.0"}
I0315 10:24:55.556046       1 controller.go:192] GRPC error: <nil>
W0315 10:24:55.556114       1 controller.go:489] "fstype" is deprecated and will be removed in a future release, please use "csi.storage.k8s.io/fstype" instead
I0315 10:24:55.556139       1 controller.go:544] CreateVolumeRequest {Name:pvc-93b49cbf-470c-11e9-8f60-dac245f50f13 CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[pool:kubernetes userid:kubernetes csi.storage.k8s.io/node-publish-secret-name:csi-rbd-secret csi.storage.k8s.io/provisioner-secret-namespace:storing fsType:ext4 monitors:192.168.2.100:6789 mounter:rbd-nbd adminid:admin csi.storage.k8s.io/node-publish-secret-namespace:storing csi.storage.k8s.io/provisioner-secret-name:csi-rbd-secret imageFeatures:layering imageFormat:2] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:<nil> XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0315 10:24:55.560802       1 controller.go:188] GRPC call: /csi.v1.Controller/CreateVolume
I0315 10:24:55.560829       1 controller.go:189] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-93b49cbf-470c-11e9-8f60-dac245f50f13","parameters":{"adminid":"admin","fsType":"ext4","imageFeatures":"layering","imageFormat":"2","monitors":"192.168.2.100:6789","mounter":"rbd-nbd","pool":"kubernetes","userid":"kubernetes"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]}
I0315 10:24:55.749229       1 controller.go:191] GRPC response: {"volume":{"capacity_bytes":1073741824,"volume_context":{"adminid":"admin","fsType":"ext4","imageFeatures":"layering","imageFormat":"2","monitors":"192.168.2.100:6789","mounter":"rbd-nbd","pool":"kubernetes","userid":"kubernetes"},"volume_id":"csi-rbd-vol-93c1ee26-470c-11e9-845a-0a580ae94410"}}
I0315 10:24:55.750185       1 controller.go:192] GRPC error: <nil>
I0315 10:24:55.750200       1 controller.go:605] create volume rep: {CapacityBytes:1073741824 VolumeId:csi-rbd-vol-93c1ee26-470c-11e9-845a-0a580ae94410 VolumeContext:map[imageFeatures:layering imageFormat:2 pool:kubernetes userid:kubernetes fsType:ext4 monitors:192.168.2.100:6789 mounter:rbd-nbd adminid:admin] ContentSource:<nil> AccessibleTopology:[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0315 10:24:55.750237       1 controller.go:666] successfully created PV {GCEPersistentDisk:nil AWSElasticBlockStore:nil HostPath:nil Glusterfs:nil NFS:nil RBD:nil ISCSI:nil Cinder:nil CephFS:nil FC:nil Flocker:nil FlexVolume:nil AzureFile:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil PortworxVolume:nil ScaleIO:nil Local:nil StorageOS:nil CSI:&CSIPersistentVolumeSource{Driver:csi-rbdplugin,VolumeHandle:csi-rbd-vol-93c1ee26-470c-11e9-845a-0a580ae94410,ReadOnly:false,FSType:ext4,VolumeAttributes:map[string]string{adminid: admin,fsType: ext4,imageFeatures: layering,imageFormat: 2,monitors: 192.168.2.100:6789,mounter: rbd-nbd,pool: kubernetes,storage.kubernetes.io/csiProvisionerIdentity: 1552638730277-8081-,userid: kubernetes,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:&SecretReference{Name:csi-rbd-secret,Namespace:storing,},}}
I0315 10:24:55.750372       1 controller.go:1026] provision "default/test-ceph-claim" class "csi-rbd": volume "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13" provisioned
I0315 10:24:55.750414       1 controller.go:1040] provision "default/test-ceph-claim" class "csi-rbd": trying to save persistentvolume "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:24:55.763326       1 controller.go:1047] provision "default/test-ceph-claim" class "csi-rbd": persistentvolume "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13" saved
I0315 10:24:55.763376       1 controller.go:1088] provision "default/test-ceph-claim" class "csi-rbd": succeeded
I0315 10:24:55.763420       1 controller.go:926] provision "default/test-ceph-claim" class "csi-rbd": started
I0315 10:24:55.764262       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-ceph-claim", UID:"93b49cbf-470c-11e9-8f60-dac245f50f13", APIVersion:"v1", ResourceVersion:"715753", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-93b49cbf-470c-11e9-8f60-dac245f50f13
I0315 10:24:55.780290       1 controller.go:935] provision "default/test-ceph-claim" class "csi-rbd": persistentvolume "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13" already exists, skipping

csi-attacher

I0315 10:24:26.184741       1 connection.go:242] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0315 10:24:26.184752       1 connection.go:243] GRPC request: {"node_id":"node5","volume_id":"csi-rbd-vol-e2661a01-4709-11e9-845a-0a580ae94410"}
I0315 10:24:26.186677       1 connection.go:245] GRPC response: {}
I0315 10:24:26.187089       1 connection.go:246] GRPC error: <nil>
I0315 10:24:26.187104       1 csi_handler.go:377] Detached "csi-70a166b60e36f9460b6fc863769ad3a4f71beaeaf374a1103e88c8a8e81b52f7"
I0315 10:24:26.187155       1 util.go:71] Marking as detached "csi-70a166b60e36f9460b6fc863769ad3a4f71beaeaf374a1103e88c8a8e81b52f7"
I0315 10:24:26.200337       1 util.go:82] Finalizer removed from "csi-70a166b60e36f9460b6fc863769ad3a4f71beaeaf374a1103e88c8a8e81b52f7"
I0315 10:24:26.200409       1 csi_handler.go:164] Fully detached "csi-70a166b60e36f9460b6fc863769ad3a4f71beaeaf374a1103e88c8a8e81b52f7"
I0315 10:24:26.200426       1 csi_handler.go:109] CSIHandler: finished processing "csi-70a166b60e36f9460b6fc863769ad3a4f71beaeaf374a1103e88c8a8e81b52f7"
I0315 10:24:26.200701       1 controller.go:203] Started PV processing "pvc-e256a3f2-4709-11e9-8f60-dac245f50f13"
I0315 10:24:26.200722       1 csi_handler.go:418] CSIHandler: processing PV "pvc-e256a3f2-4709-11e9-8f60-dac245f50f13"
I0315 10:24:26.200773       1 csi_handler.go:460] CSIHandler: processing PV "pvc-e256a3f2-4709-11e9-8f60-dac245f50f13": no VA found, removing finalizer
I0315 10:24:26.210724       1 csi_handler.go:481] Removed finalizer from PV "pvc-e256a3f2-4709-11e9-8f60-dac245f50f13"
I0315 10:24:55.761773       1 controller.go:203] Started PV processing "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:24:55.761816       1 csi_handler.go:418] CSIHandler: processing PV "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:24:55.761835       1 csi_handler.go:422] CSIHandler: processing PV "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13": no deletion timestamp, ignoring
I0315 10:24:55.776496       1 controller.go:203] Started PV processing "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:24:55.776619       1 csi_handler.go:418] CSIHandler: processing PV "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:24:55.776658       1 csi_handler.go:422] CSIHandler: processing PV "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13": no deletion timestamp, ignoring
I0315 10:24:55.807430       1 controller.go:203] Started PV processing "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:24:55.807569       1 csi_handler.go:418] CSIHandler: processing PV "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:24:55.807605       1 csi_handler.go:422] CSIHandler: processing PV "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13": no deletion timestamp, ignoring
I0315 10:25:05.361574       1 controller.go:173] Started VA processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.361610       1 csi_handler.go:93] CSIHandler: processing VA "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.361622       1 csi_handler.go:120] Attaching "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.361633       1 csi_handler.go:259] Starting attach operation for "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.361778       1 csi_handler.go:220] Adding finalizer to PV "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:25:05.371109       1 csi_handler.go:228] PV finalizer added to "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:25:05.372242       1 controller.go:203] Started PV processing "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:25:05.372264       1 csi_handler.go:418] CSIHandler: processing PV "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:25:05.372272       1 csi_handler.go:422] CSIHandler: processing PV "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13": no deletion timestamp, ignoring
I0315 10:25:05.373466       1 csi_handler.go:524] Can't get CSINodeInfo node5: the server could not find the requested resource (get csinodeinfos.csi.storage.k8s.io node5)
I0315 10:25:05.373565       1 csi_handler.go:181] VA finalizer added to "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.373577       1 csi_handler.go:195] NodeID annotation added to "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.380691       1 csi_handler.go:205] VolumeAttachment "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae" updated with finalizer and/or NodeID annotation
I0315 10:25:05.380726       1 connection.go:242] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0315 10:25:05.380732       1 connection.go:243] GRPC request: {"node_id":"node5","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"adminid":"admin","fsType":"ext4","imageFeatures":"layering","imageFormat":"2","monitors":"192.168.2.100:6789","mounter":"rbd-nbd","pool":"kubernetes","storage.kubernetes.io/csiProvisionerIdentity":"1552638730277-8081-","userid":"kubernetes"},"volume_id":"csi-rbd-vol-93c1ee26-470c-11e9-845a-0a580ae94410"}
I0315 10:25:05.387284       1 connection.go:245] GRPC response: {}
I0315 10:25:05.387762       1 connection.go:246] GRPC error: <nil>
I0315 10:25:05.387776       1 csi_handler.go:133] Attached "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.387788       1 util.go:33] Marking as attached "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.396748       1 util.go:43] Marked as attached "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.396782       1 csi_handler.go:139] Fully attached "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.396799       1 csi_handler.go:109] CSIHandler: finished processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.396860       1 controller.go:173] Started VA processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.396875       1 csi_handler.go:93] CSIHandler: processing VA "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.396887       1 csi_handler.go:120] Attaching "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.396928       1 csi_handler.go:259] Starting attach operation for "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.397004       1 csi_handler.go:214] PV finalizer is already set on "pvc-93b49cbf-470c-11e9-8f60-dac245f50f13"
I0315 10:25:05.398421       1 csi_handler.go:524] Can't get CSINodeInfo node5: the server could not find the requested resource (get csinodeinfos.csi.storage.k8s.io node5)
I0315 10:25:05.398574       1 csi_handler.go:173] VA finalizer is already set on "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.398595       1 csi_handler.go:187] NodeID annotation is already set on "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.398614       1 connection.go:242] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0315 10:25:05.398622       1 connection.go:243] GRPC request: {"node_id":"node5","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"adminid":"admin","fsType":"ext4","imageFeatures":"layering","imageFormat":"2","monitors":"192.168.2.100:6789","mounter":"rbd-nbd","pool":"kubernetes","storage.kubernetes.io/csiProvisionerIdentity":"1552638730277-8081-","userid":"kubernetes"},"volume_id":"csi-rbd-vol-93c1ee26-470c-11e9-845a-0a580ae94410"}
I0315 10:25:05.402940       1 connection.go:245] GRPC response: {}
I0315 10:25:05.403462       1 connection.go:246] GRPC error: <nil>
I0315 10:25:05.403476       1 csi_handler.go:133] Attached "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.403492       1 util.go:33] Marking as attached "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.414834       1 csi_handler.go:103] Error processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae": failed to mark as attached: Operation cannot be fulfilled on volumeattachments.storage.k8s.io "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae": the object has been modified; please apply your changes to the latest version and try again
I0315 10:25:05.414909       1 controller.go:173] Started VA processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.414931       1 csi_handler.go:93] CSIHandler: processing VA "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.414947       1 csi_handler.go:115] "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae" is already attached
I0315 10:25:05.415008       1 csi_handler.go:109] CSIHandler: finished processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.420289       1 controller.go:173] Started VA processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.420330       1 csi_handler.go:93] CSIHandler: processing VA "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:25:05.420344       1 csi_handler.go:115] "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae" is already attached
I0315 10:25:05.420356       1 csi_handler.go:109] CSIHandler: finished processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:32:10.224852       1 controller.go:173] Started VA processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:32:10.224889       1 csi_handler.go:93] CSIHandler: processing VA "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"
I0315 10:32:10.224897       1 csi_handler.go:115] "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae" is already attached
I0315 10:32:10.224905       1 csi_handler.go:109] CSIHandler: finished processing "csi-81b4dfbce5cc2aa9aaa31bad716afa0ed087ca610ff3effbee80901ce49f4cae"

If any log information is missing, or you would like to take a look at the kubectl manifests, I am happy to provide them!

wilmardo commented 5 years ago

This is fixed in the latest canary. I will close the issue but leave it here for reference, for whoever is searching through the issues (like I did).

Sorry for opening the issue but I couldn't find any information about it :)

I0315 13:24:07.598278   20530 mount_linux.go:515] Attempting to determine if disk "/dev/nbd0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nbd0])
I0315 13:24:07.602091   20530 mount_linux.go:518] Output: "", err: exit status 2
I0315 13:24:07.602196   20530 mount_linux.go:489] Disk "/dev/nbd0" appears to be unformatted, attempting to format as type: "xfs" with options: [/dev/nbd0]
I0315 13:24:07.948260   20530 mount_linux.go:493] Disk successfully formatted (mkfs): xfs - /dev/nbd0 /var/lib/kubelet/pods/92e0efe4-4725-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-839a3b86-4725-11e9-8f60-dac245f50f13/mount
I0315 13:24:07.948296   20530 nsenter_mount.go:80] nsenter mount /dev/nbd0 /var/lib/kubelet/pods/92e0efe4-4725-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-839a3b86-4725-11e9-8f60-dac245f50f13/mount xfs [defaults]
I0315 13:24:07.948329   20530 nsenter.go:130] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /var/lib/kubelet/pods/92e0efe4-4725-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-839a3b86-4725-11e9-8f60-dac245f50f13/mount --scope -- /bin/mount -t xfs -o defaults /dev/nbd0 /var/lib/kubelet/pods/92e0efe4-4725-11e9-8f60-dac245f50f13/volumes/kubernetes.io~csi/pvc-839a3b86-4725-11e9-8f60-dac245f50f13/mount]
james58899 commented 5 years ago

I have the same issue: the first volume mounts successfully, but the second one fails. The plugin tries to find a pid file for every nbd device, does not find the one belonging to the first volume's mapping, and then tries to reuse nbd0 when mounting the second volume (a rough sketch of that detection logic, as I understand it, follows the log below).

I0508 15:21:03.326551   21961 rbd_attach.go:105] found nbds max parameters file at /sys/module/nbd/parameters/nbds_max 
I0508 15:21:03.326613   21961 rbd_attach.go:117] rbd-nbd: max_nbds: 16 
I0508 15:21:03.326653   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd0: open /sys/block/nbd0/pid: no such file or directory 
I0508 15:21:03.326698   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd1: open /sys/block/nbd1/pid: no such file or directory 
I0508 15:21:03.326755   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd2: open /sys/block/nbd2/pid: no such file or directory 
I0508 15:21:03.326806   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd3: open /sys/block/nbd3/pid: no such file or directory 
I0508 15:21:03.326834   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd4: open /sys/block/nbd4/pid: no such file or directory 
I0508 15:21:03.326887   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd5: open /sys/block/nbd5/pid: no such file or directory 
I0508 15:21:03.326939   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd6: open /sys/block/nbd6/pid: no such file or directory 
I0508 15:21:03.326997   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd7: open /sys/block/nbd7/pid: no such file or directory 
I0508 15:21:03.327033   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd8: open /sys/block/nbd8/pid: no such file or directory 
I0508 15:21:03.327088   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd9: open /sys/block/nbd9/pid: no such file or directory 
I0508 15:21:03.327133   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd10: open /sys/block/nbd10/pid: no such file or directory 
I0508 15:21:03.327173   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd11: open /sys/block/nbd11/pid: no such file or directory 
I0508 15:21:03.327203   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd12: open /sys/block/nbd12/pid: no such file or directory 
I0508 15:21:03.327234   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd13: open /sys/block/nbd13/pid: no such file or directory 
I0508 15:21:03.327285   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd14: open /sys/block/nbd14/pid: no such file or directory 
I0508 15:21:03.327342   21961 rbd_attach.go:159] did not find valid pid file in dir /sys/block/nbd15: open /sys/block/nbd15/pid: no such file or directory 

full log: container.log
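
As far as I can tell from the log, the plugin walks /sys/block/nbd*/pid to figure out which nbd device already holds an image, and treats a missing pid file as "device not in use". Here is a rough sketch of that detection logic as I understand it; the matching against the rbd-nbd command line is my assumption, not necessarily what rbd_attach.go does:

```go
// Rough sketch of the nbd-device detection as I understand it from the log;
// the exact matching in rbd_attach.go may differ.
package main

import (
	"fmt"
	"os"
	"strings"
)

// findNBDDeviceForImage scans /sys/block/nbd*/pid and checks whether the
// rbd-nbd process that owns each device was started for pool/image.
func findNBDDeviceForImage(pool, image string, maxNBDs int) (string, error) {
	for i := 0; i < maxNBDs; i++ {
		pidPath := fmt.Sprintf("/sys/block/nbd%d/pid", i)
		pid, err := os.ReadFile(pidPath)
		if err != nil {
			// No pid file: this nbd device is not in use. This is the
			// "did not find valid pid file" message in the log above.
			continue
		}
		cmdline, err := os.ReadFile(fmt.Sprintf("/proc/%s/cmdline", strings.TrimSpace(string(pid))))
		if err != nil {
			continue
		}
		// cmdline arguments are NUL-separated; check whether pool/image
		// appears in the rbd-nbd invocation that owns this device.
		args := strings.ReplaceAll(string(cmdline), "\x00", " ")
		if strings.Contains(args, pool+"/"+image) {
			return fmt.Sprintf("/dev/nbd%d", i), nil
		}
	}
	return "", fmt.Errorf("image %s/%s is not mapped to any nbd device", pool, image)
}

func main() {
	dev, err := findNBDDeviceForImage("kubernetes", "pvc-example", 16)
	fmt.Println(dev, err)
}
```

If the second volume fails because the pid-file lookup never recognizes the first mapping, the mapper would try to hand out nbd0 again, which matches what I am seeing.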

wilmardo commented 5 years ago

@james58899 Have you tried with the canary tag? That fixed the problem for me :)

james58899 commented 5 years ago

@wilmardo I tried changing the rbdplugin image to the canary tag, but it did not help.

Madhu-1 commented 5 years ago

@james58899 Is there a way we can reproduce this issue?

james58899 commented 5 years ago

@Madhu-1 I am not sure how to reproduce it, but setting mounter: rbd-nbd in the StorageClass causes this issue.

Cluster info:
OS: Ubuntu 18.04.2
Docker: 18.09.5
Kubernetes: 1.13.5 (kubeadm)
Ceph: 14.2.1 (rook 1.0.0)

Madhu-1 commented 5 years ago

@james58899 If possible, I will try to reproduce this and update the issue.

james58899 commented 5 years ago

@Madhu-1 Were you able to reproduce it, or do you need more information?

Madhu-1 commented 5 years ago

@james58899 I forgot about this issue. I will test this out shortly.