Closed: ygao-armada closed this issue 3 weeks ago.
After I installed lvm2 in the osImage, the OSD pods are created successfully:
$ kubectl -n rook-ceph get pod
...
rook-ceph-osd-0-556d6d75f9-l6pbz 2/2 Running 0 9m14s
rook-ceph-osd-1-59c4c76ccc-6wwpv 2/2 Running 0 8m45s
rook-ceph-osd-2-54dddf59bf-r69m8 2/2 Running 0 7m58s
rook-ceph-osd-3-696d9dd87b-5wh4w 2/2 Running 0 7m58s
On the node, we can see:
# lsblk -f
NAME FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
...
sdd LVM2_m QSU1CB-Vxkn-jXah-Rufx-MhiB-IWKu-6z8sN0
└─ceph--bad12e0b--fe26--44e9--897a--10cfe0ac0d50-osd--block--5268c673--8ffb--4a19--ac9a--c8a49e96a2e2
└─4ttf0w-cSmK-XB06-Vgum-kWoE-MtTk-lsr1e9
sde LVM2_m gMrIC4-EL9a-ON47-UIFz-7uDd-RY8U-2bwbxX
└─ceph--3e3400bb--073b--43a6--9759--0735fb4bf8fd-osd--block--891a74d0--ba9c--4eb3--8a39--8eff473015ec
└─ebxL39-JSUq-NZYY-NBEC-GuSM-Ai1e-V044mU
sdf LVM2_m GnUl5A-LI89-qgBB-t2sx-Pj3w-bIxb-UatsxQ
└─ceph--8ee5f47b--695c--4dda--9dab--1e0b16579f62-osd--block--dac48c85--c1f6--494d--b74b--fae0919810eb
└─K9vETz-8Afu-rvuL-3YZd-1rd0-aQg5-MjeTwS
...
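For reference, "install lvm2 in the osImage" boils down to something like the following on Ubuntu 20.04; the surrounding image-build tooling (e.g. the EKS Anywhere image-builder workflow) is not shown here, so treat this as a sketch rather than the exact steps:

# Sketch: bake lvm2 into the Ubuntu 20.04 osImage so that ceph-volume in the
# OSD prepare pods can find the host's lvm tooling.
sudo apt-get update
sudo apt-get install -y lvm2

# Verify on the provisioned node; the path may be /sbin/lvm or /usr/sbin/lvm
# depending on the image.
which lvm
lvm version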
Is this a bug report or feature request? Bug report
I failed with two tries:
The first try leaves the rook-ceph-osd-prepare-xxx pods in Running, with rook-ceph-osd-prepare logs like these:
The second try leaves the rook-ceph-osd-prepare-xxx pods in CrashLoopBackOff, with rook-ceph-osd-prepare logs like these:
Then I tried copying lvm to /usr/sbin/lvm (roughly as sketched below) and got these logs:
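A sketch of that copy step; where the lvm binary came from is an assumption on my part, and this was only a workaround attempt, not a recommended fix:

# Drop a spare lvm binary onto the node (source binary is an assumption;
# lvm2 itself was not installed on this node).
cp lvm /usr/sbin/lvm
chmod +x /usr/sbin/lvm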
Deviation from expected behavior: no OSD is created.
Expected behavior: OSDs are created properly.
How to reproduce it (minimal and precise):
Just run the above commands on an EKS Anywhere bare metal cluster with Ubuntu 20.04 (to be honest, I'm afraid it's a general issue).
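The "above commands" are the standard Rook deployment steps, roughly the following; the manifest names assume the stock examples from the Rook repo (deploy/examples) plus my cluster.yaml:

# Sketch of the reproduce steps with the stock Rook example manifests.
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml

# Then watch the prepare pods and pull their logs.
kubectl -n rook-ceph get pod | grep rook-ceph-osd-prepare
kubectl -n rook-ceph logs <rook-ceph-osd-prepare-pod-name>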
File(s) to submit:
cluster.yaml, if necessary (available upon request)
Logs to submit: mentioned above
Environment:
Kernel (uname -a): 5.4.0-177-generic
Rook version (rook version inside of a Rook Pod): v1.13.6
Ceph version (ceph -v):
Kubernetes version (kubectl version):
Ceph health (ceph health in the Rook Ceph toolbox):
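For completeness, the remaining fields can be collected roughly like this (a sketch; the deployment names assume the stock operator and toolbox manifests):

uname -a
kubectl -n rook-ceph exec deploy/rook-ceph-operator -- rook version
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph -v
kubectl version
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health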