```
root@pro-docker-2-62:~/ceph-helm/ceph# helm status ceph
LAST DEPLOYED: Mon Oct 15 11:10:56 2018
NAMESPACE: ceph
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME              DATA  AGE
ceph-bin-clients  2     33m
ceph-bin          26    33m
ceph-etc          1     33m
ceph-templates    5     33m

==> v1/StorageClass
NAME      PROVISIONER   AGE
ceph-rbd  ceph.com/rbd  33m

==> v1/Service
NAME      TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
ceph-mon  ClusterIP  None

==> v1beta1/DaemonSet
NAME              DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR                                     AGE
ceph-mon          0        0        0      0           0          ceph-mon=enabled                                  33m
ceph-osd-dev-sdb  9        9        0      9           0          ceph-osd-device-dev-sdb=enabled,ceph-osd=enabled  33m

==> v1beta1/Deployment
NAME                  DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
ceph-mds              1        1        1           0          33m
ceph-mgr              1        1        1           0          33m
ceph-mon-check        1        1        1           0          33m
ceph-rbd-provisioner  2        2        2           2          33m
ceph-rgw              1        1        1           0          33m

==> v1/Job
NAME                                 DESIRED  SUCCESSFUL  AGE
ceph-mds-keyring-generator           1        0           33m
ceph-osd-keyring-generator           1        0           33m
ceph-rgw-keyring-generator           1        0           33m
ceph-mgr-keyring-generator           1        0           33m
ceph-mon-keyring-generator           1        0           33m
ceph-namespace-client-key-generator  1        0           33m
ceph-storage-keys-generator          1        0           33m

==> v1/Pod(related)
NAME                                       READY  STATUS    RESTARTS  AGE
ceph-osd-dev-sdb-4fd4c                     0/1    Init:0/3  0         33m
ceph-osd-dev-sdb-6g4wl                     0/1    Init:0/3  0         33m
ceph-osd-dev-sdb-dc82d                     0/1    Init:0/3  0         33m
ceph-osd-dev-sdb-g6rh2                     0/1    Init:0/3  0         33m
ceph-osd-dev-sdb-gfvpn                     0/1    Init:0/3  0         33m
ceph-osd-dev-sdb-j7lkd                     0/1    Init:0/3  0         33m
ceph-osd-dev-sdb-jsf5t                     0/1    Init:0/3  0         33m
ceph-osd-dev-sdb-nm4jd                     0/1    Init:0/3  0         33m
ceph-osd-dev-sdb-tbggt                     0/1    Init:0/3  0         33m
ceph-mds-68c79b5cc-2qs69                   0/1    Pending   0         33m
ceph-mgr-6c687f5964-7fxqb                  0/1    Pending   0         33m
ceph-mon-check-676d984874-x4vdp            0/1    Pending   0         33m
ceph-rbd-provisioner-5b9bfb859d-nzxvg      1/1    Running   0         33m
ceph-rbd-provisioner-5b9bfb859d-stkw5      1/1    Running   0         33m
ceph-rgw-6d946b-tsxb9                      0/1    Pending   0         33m
ceph-mds-keyring-generator-2ct5f           0/1    Pending   0         33m
ceph-osd-keyring-generator-9dbd5           0/1    Pending   0         33m
ceph-rgw-keyring-generator-6vbwm           0/1    Pending   0         33m
ceph-mgr-keyring-generator-9ml5w           0/1    Pending   0         33m
ceph-mon-keyring-generator-7rkvz           0/1    Pending   0         33m
ceph-namespace-client-key-generator-b55gp  0/1    Pending   0         33m
ceph-storage-keys-generator-j6csw          0/1    Pending   0         33m

==> v1/Secret
NAME                    TYPE    DATA  AGE
ceph-keystone-user-rgw  Opaque  7     33m
```
```
root@pro-docker-2-62:~/ceph-helm/ceph# kubectl get nodes --show-labels
NAME              STATUS  ROLES  AGE  VERSION  LABELS
pro-docker-2-178  Ready
```
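Note: in the `helm status` output above, the ceph-mon DaemonSet shows DESIRED 0 and every keyring-generator pod is Pending, which suggests no node carries the labels the chart schedules on (see the NODE SELECTOR column). The kube-helm walkthrough labels nodes roughly like this; the node name here is taken from the `kubectl get nodes` output above, and the labels need to be repeated for every node that should run a mon/mgr/OSD:

```
# Labels used by the ceph-helm node selectors shown above.
kubectl label node pro-docker-2-178 ceph-mon=enabled ceph-mgr=enabled
kubectl label node pro-docker-2-178 ceph-osd=enabled ceph-osd-device-dev-sdb=enabled
```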
Use Rook instead of the Helm chart: https://rook.io/docs/rook/v0.8/ceph-quickstart.html
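For reference, the Rook quickstart linked above boils down to roughly the following. This is a sketch based on the v0.8 example manifests; the file locations may differ in later releases, and cluster.yaml would still need to be edited for the /dev/sdb layout described in this issue:

```
# Sketch of the Rook v0.8 Ceph quickstart linked above.
git clone https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml   # deploys the Rook operator
kubectl create -f cluster.yaml    # declares the Ceph cluster itself
```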
Is this a request for help?: Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes: Kubernetes (kubeadm) 1.12.1, Helm 2.9.1
Which chart: ceph-helm, following http://docs.ceph.com/docs/master/start/kube-helm/. I would like an OSD on /dev/sdb on every node.
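For context, the install from that walkthrough looks roughly like the sketch below. The CIDRs are placeholders for my environment, not values from the docs; the `dev-sdb` entry is what produces the ceph-osd-dev-sdb DaemonSet seen above:

```
# Overrides file as described in the kube-helm walkthrough.
cat > ~/ceph-overrides.yaml <<'EOF'
network:
  public:  172.21.0.0/20    # placeholder, use your cluster's networks
  cluster: 172.21.0.0/20

osd_devices:
  - name: dev-sdb           # yields the ceph-osd-dev-sdb DaemonSet
    device: /dev/sdb
    zap: "1"                # wipe the device before use
EOF

helm install --name=ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml
```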
What happened: I have tried many times; the OSD pods never start. Events from one of the OSD pods:

```
Events:
  Type     Reason       Age                  From                      Message
  ----     ------       ----                 ----                      -------
  Normal   Scheduled    17m                  default-scheduler         Successfully assigned ceph/ceph-osd-dev-sdb-4fd4c to pro-docker-2-64
  Warning  FailedMount  17m (x5 over 17m)    kubelet, pro-docker-2-64  MountVolume.SetUp failed for volume "ceph-bootstrap-osd-keyring" : secret "ceph-bootstrap-osd-keyring" not found
  Warning  FailedMount  17m (x5 over 17m)    kubelet, pro-docker-2-64  MountVolume.SetUp failed for volume "ceph-bootstrap-mds-keyring" : secret "ceph-bootstrap-mds-keyring" not found
  Warning  FailedMount  17m (x5 over 17m)    kubelet, pro-docker-2-64  MountVolume.SetUp failed for volume "ceph-mon-keyring" : secret "ceph-mon-keyring" not found
  Warning  FailedMount  11m (x11 over 17m)   kubelet, pro-docker-2-64  MountVolume.SetUp failed for volume "ceph-client-admin-keyring" : secret "ceph-client-admin-keyring" not found
  Warning  FailedMount  7m8s (x13 over 17m)  kubelet, pro-docker-2-64  MountVolume.SetUp failed for volume "ceph-bootstrap-rgw-keyring" : secret "ceph-bootstrap-rgw-keyring" not found
  Warning  FailedMount  103s (x7 over 15m)   kubelet, pro-docker-2-64  Unable to mount volumes for pod "ceph-osd-dev-sdb-4fd4c_ceph(ede3590e-d027-11e8-a389-005056b224f1)": timeout expired waiting for volumes to attach or mount for pod "ceph"/"ceph-osd-dev-sdb-4fd4c". list of unmounted volumes=[ceph-client-admin-keyring ceph-mon-keyring ceph-bootstrap-osd-keyring ceph-bootstrap-mds-keyring ceph-bootstrap-rgw-keyring]. list of unattached volumes=[devices pod-var-lib-ceph pod-run ceph-bin ceph-etc ceph-client-admin-keyring ceph-mon-keyring ceph-bootstrap-osd-keyring ceph-bootstrap-mds-keyring ceph-bootstrap-rgw-keyring run-udev default-token-k2x7h]
```
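The secrets named in these events are the ones the ceph-*-keyring-generator jobs are supposed to create, and all of those generator pods are still Pending in the `helm status` output above. A quick way to confirm, using plain kubectl (the pod name is from the output above):

```
kubectl -n ceph get jobs         # every generator shows 0 SUCCESSFUL
kubectl -n ceph get secrets      # the *-keyring secrets are absent
kubectl -n ceph describe pod ceph-mon-keyring-generator-7rkvz   # shows why it is Pending
```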
What you expected to happen: `helm status ceph` should show the Ceph cluster fully deployed, with all pods running.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know: I am using VMware to attach the virtual disk /dev/sdb. The entire Kubernetes cluster (master and nodes) runs Ubuntu 18.04 with the latest updates.
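To rule out the VMware side, the disk can be verified as visible and unused on each node, e.g.:

```
lsblk /dev/sdb    # disk should be listed with no partitions or mounts
ls -l /dev/sdb    # block device node exists
```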
Many thanks,