GOVYANSONG closed this issue 5 years ago
After much struggle, the ubuntu pod was finally created:
```
Normal  SuccessfulMountVolume  28s (x2 over 29s)  kubelet, fop-gaoyans3  MapVolume.MapDevice succeeded for volume "ceph-block-pv" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/rbd/volumeDevices/replicapool-image-tstimg"
Normal  SuccessfulMountVolume  28s (x2 over 29s)  kubelet, fop-gaoyans3  MapVolume.MapDevice succeeded for volume "ceph-block-pv" volumeMapPath "/var/lib/kubelet/pods/64f934bf-e293-4269-8e81-218b71fc8ceb/volumeDevices/kubernetes.io~rbd"
Normal  Pulling                25s                kubelet, fop-gaoyans3  Pulling image "virtlet.cloud/cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img"
Normal  Pulled                 5s                 kubelet, fop-gaoyans3  Successfully pulled image "virtlet.cloud/cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img"
Normal  Created                4s                 kubelet, fop-gaoyans3  Created container ubuntu-vm
Normal  Started                3s                 kubelet, fop-gaoyans3  Started container ubuntu-vm
```
Create a rook-ceph cluster for test purposes, e.g. using the sample yaml files with the word 'test' in the file name.
Install rook-ceph-tools.
Create a pool:
```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 1
  deviceClass: hdd
```
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin
  namespace: default
data:
  adminID: YWRtaW4=
  adminKey: QVFEQm9KdGRwWklOT0JBQXlWUCthZStjSDVlRGJMcTFQazltSUE9PQ==
  userID: YWRtaW4=
  userKey: QVFEQm9KdGRwWklOT0JBQXlWUCthZStjSDVlRGJMcTFQazltSUE9PQ==
```
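For reference, the `data` values in a Kubernetes Secret are plain base64. A quick check (Python here, purely for illustration) shows that `adminID`/`userID` are just the Ceph user name "admin" encoded, and the same encoding produces the values for a new secret:

```python
import base64

# Kubernetes Secret `data` fields are base64-encoded strings.
# adminID/userID carry the Ceph user name; adminKey/userKey carry the CephX key.
print(base64.b64decode("YWRtaW4=").decode())  # -> admin

# Encoding the plain string yields the value to paste into the secret:
print(base64.b64encode(b"admin").decode())    # -> YWRtaW4=
```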
Create an image: `rbd create --size 5 replicapool/tstimg`
Disable the features that trigger the feature-mismatch warning: `rbd feature disable replicapool/tstimg object-map fast-diff deep-flatten`
Updated version of ubuntu-vm-rbd-block-pv.yaml (fragments; elided fields marked):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-block-pv
spec:
  accessModes:
  # (elided)
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-block-pvc
spec:
  accessModes:
  # (elided)
---
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-vm-rdb-block-pv
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud
    VirtletSSHKeys: |
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCaJEcFDXEK2ZbX0ZLS1EIYFZRbDAcRfuVjpstSc0De8+sV1aiu+dePxdkuDRwqFtCyk6dEZkssjOkBXtri00MECLkir6FcH3kKOJtbJ6vy3uaJc9w1ERo+wyl6SkAh/+JTJkp7QRXj8oylW5E20LsbnA/dIwWzAF51PPwF7A7FtNg9DnwPqMkxFo1Th/buOMKbP5ZA1mmNNtmzbMpMfJATvVyiv3ccsSJKOiyQr6UG+j7sc/7jMVz5Xk34Vd0l8GwcB0334MchHckmqDB142h/NCWTr8oLakDNvkfC1YneAfAO41hDkUbxPtVBG5M/o7P4fxoqiHEX+ZLfRxDtHB53 me@localhost
    VirtletCloudInitUserData: |
      mounts:
      # (elided)
spec:
  terminationGracePeriodSeconds: 120
  containers:
  # (elided)
    # tty and stdin required for `kubectl attach -t` to work
    tty: true
    stdin: true
    volumeDevices:
    # (elided)
```
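The fragments above elide the actual `rbd` source of the PersistentVolume. For orientation only, a block-mode RBD PV that points at the pool, image, and secret created earlier typically looks like the sketch below; the monitor endpoint is a placeholder, not taken from this cluster, and the field names follow the standard Kubernetes `rbd` volume source:

```yaml
# Sketch, not the author's exact manifest: the monitor endpoint is a
# placeholder for whatever the rook-ceph cluster actually exposes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-block-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Mi          # matches `rbd create --size 5` (size is in MB)
  volumeMode: Block       # required for volumeDevices in the pod
  rbd:
    monitors:
      - "10.0.0.1:6789"   # placeholder: use the rook-ceph-mon endpoints
    pool: replicapool
    image: tstimg
    user: admin
    secretRef:
      name: ceph-admin
```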
I created a ceph cluster on k8s with rook-ceph, following the example at https://github.com/Mirantis/virtlet/blob/master/examples/ubuntu-vm-rbd-block-pv.yaml. After making the necessary changes for my environment, the ubuntu container is stuck in the creation stage. Describing the pod gives the following info:
```
Name:         ubuntu-vm-rdb-block-pv
Namespace:    default
Priority:     0
Node:         fop-gaoyans3/172.20.100.252
Start Time:   Tue, 08 Oct 2019 18:49:26 +0000
Labels:
Annotations:  VirtletCloudInitUserData:
                mounts:
Events:
  Warning  FailedScheduling                       default-scheduler      pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled                              default-scheduler      Successfully assigned default/ubuntu-vm-rdb-block-pv to fop-gaoyans3
  Warning  FailedMount      17m                   kubelet, fop-gaoyans3  Unable to attach or mount volumes: unmounted volumes=[ceph-block-pvc], unattached volumes=[ceph-block-pvc default-token-7pcqp]: timed out waiting for the condition
  Warning  FailedMapVolume  13m                   kubelet, fop-gaoyans3  MapVolume.WaitForAttach failed for volume "ceph-block-pv" : fail to check rbd image status with: (exit status 22), rbd output: (2019-10-08 19:01:51.118364 7f3aa4b3f0c0 -1 did not load config file, using default settings.
2019-10-08 19:01:51.123118 7f3aa4b3f0c0 -1 Errors while parsing config file!
2019-10-08 19:01:51.123129 7f3aa4b3f0c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-10-08 19:01:51.123130 7f3aa4b3f0c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2019-10-08 19:01:51.123130 7f3aa4b3f0c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-10-08 19:01:51.124085 7f3aa4b3f0c0 -1 Errors while parsing config file!
2019-10-08 19:01:51.124096 7f3aa4b3f0c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2019-10-08 19:01:51.124097 7f3aa4b3f0c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2019-10-08 19:01:51.124104 7f3aa4b3f0c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2019-10-08 19:01:51.607281 7f3aa4b3f0c0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-10-08 19:01:51.617790 7f3aa4b3f0c0 -1 auth: failed to decode key 'admin'
2019-10-08 19:01:51.617811 7f3aa4b3f0c0  0 librados: client.admin initialization error (22) Invalid argument
rbd: couldn't connect to the cluster!
)
  Warning  FailedMount      3m46s (x9 over 24m)   kubelet, fop-gaoyans3  Unable to attach or mount volumes: unmounted volumes=[ceph-block-pvc], unattached volumes=[default-token-7pcqp ceph-block-pvc]: timed out waiting for the condition
  Warning  FailedMapVolume  3m42s (x18 over 26m)  kubelet, fop-gaoyans3  MapVolume.WaitForAttach failed for volume "ceph-block-pv" : fail to check rbd image status with: (fork/exec /usr/bin/rbd: invalid argument), rbd output: ()
```
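One detail worth noting in the log above: `auth: failed to decode key 'admin'` suggests the rbd client was handed the literal string "admin" where it expected a CephX key, plausibly a mix-up between the ID and key fields of the secret. "admin" is not even valid base64, while the `adminKey` value from the secret decodes to a proper CephX key (these start with "AQ"). A quick illustration in Python:

```python
import base64
import binascii

# "admin" is 5 characters, not a multiple of 4, so it is not valid base64 --
# consistent with rbd's "failed to decode key 'admin'" error.
try:
    base64.b64decode("admin", validate=True)
except binascii.Error as err:
    print("decode failed:", err)

# The adminKey value from the secret, by contrast, decodes to a CephX key:
key_b64 = "QVFEQm9KdGRwWklOT0JBQXlWUCthZStjSDVlRGJMcTFQazltSUE9PQ=="
print(base64.b64decode(key_b64).decode())  # starts with "AQ"
```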
Has anyone encountered this issue before and can share insights?