obi12341 closed this issue 3 years ago.
Ping - sounds familiar.
same here
2020-11-27 18:04:14.160738 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sdb /dev/sda --db-devices /dev/md0p3 --report
2020-11-27 18:04:21.406855 D | exec: --> passed data devices: 2 physical, 0 LVM
2020-11-27 18:04:21.406920 D | exec: --> relative data size: 1.0
2020-11-27 18:04:21.407166 D | exec: --> passed block_db devices: 0 physical, 1 LVM
2020-11-27 18:04:21.408214 D | exec: Traceback (most recent call last):
2020-11-27 18:04:21.408238 D | exec: File "/usr/sbin/ceph-volume", line 9, in <module>
It works without "--yes" in the ceph-volume params. Now trying to build a patched version.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
still no solution for this?
There is a related discussion in #7121, and a PR in progress in the ceph repo that, at a glance, will help with this scenario as well.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
Just in case: I had this issue with LVM volumes used for --db-devices (filled in by Rook's metadataDevice). Looking through the Ceph ceph-volume source code, in particular the class ceph_volume.util.device.Device, I found that specifying the logical volume in "vg/lv" format works. So use, for example, metadataDevice: "vg-metadata-0/metadata-0-2" (instead of metadataDevice: "/dev/vg-metadata-0/metadata-0-2" or metadataDevice: "/dev/dm-2").
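For context, here is a minimal sketch of where that setting lives in a Rook CephCluster storage spec; the node and device names are placeholders, and only the metadataDevice value is taken from the comment above:

```yaml
# Hedged sketch: node and data-device names below are placeholders.
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
    - name: "node-a"
      devices:
        - name: "sdb"
      config:
        # Reference the LV as "vg/lv", not "/dev/vg/lv" or "/dev/dm-N".
        metadataDevice: "vg-metadata-0/metadata-0-2"
```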
@lyind, this comment came to the rescue! I had been trying to find a solution for this for many hours. This trick did it, thanks!
I had a similar but different IndexError (I was trying to install an OSD onto LVM with a pre-existing LV using ceph-ansible, the branch for Ceph Octopus). I fixed it by using an OSD drives config like this (example):
lvm_volumes:
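  # Hedged sketch (not the commenter's original entries): the VG/LV names below
  # are placeholders, following the documented ceph-ansible lvm_volumes format
  # (data/data_vg for the data LV, db/db_vg for a separate BlueStore DB LV).
  - data: data-lv-1
    data_vg: data-vg-1
    db: db-lv-1
    db_vg: db-vg-1
  - data: data-lv-2
    data_vg: data-vg-1
    db: db-lv-2
    db_vg: db-vg-1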
@lyind's answer made me rethink how to specify vg/lv pairs. Maybe this will help someone, since this issue comes up in Google searches.
Is this a bug report or feature request?
Deviation from expected behavior: OSD is not created
Expected behavior: OSD should be recreated
How to reproduce it (minimal and precise): We had a failing device and want to replace it. We followed the instructions in the docs, but I think we have a somewhat different setup because we use a metadataDevice. When the OSD on the new device is being recreated, ceph-volume produces an error because /dev/sda (which is the metadataDevice) is locked. As far as I understand, this is the correct behaviour, but can you describe a way to replace a disk when a metadataDevice is configured?
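For illustration, a hedged sketch of the kind of per-node layout described; the node name and data device are placeholders (the submitter's actual cluster.yaml is not shown here), and only sda as the metadata device comes from the description above:

```yaml
# Illustrative sketch only: node name and data device are placeholders;
# sda is the shared metadata device mentioned in the description above.
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
    - name: "node-1"
      devices:
        - name: "sdb"        # placeholder: the failed data disk being replaced
      config:
        metadataDevice: "sda"
```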
File(s) to submit:
- cluster.yaml, if necessary
- Crashing pod(s) logs, if necessary

To get logs, use kubectl -n <namespace> logs <pod name>
When pasting logs, always surround them with backticks or use the insert code button from the Github UI. Read Github documentation if you need help.

Environment:
- Kernel (uname -a): Linux de-her-k8s-mgmt-host 4.15.0-99-generic #100-Ubuntu SMP Wed Apr 22 20:32:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Rook version (rook version inside of a Rook Pod): 1.4.7
- Ceph version (ceph -v): 15.2.5-0
- Kubernetes version (kubectl version): v1.19.4
- Ceph health (ceph health in the Rook Ceph toolbox):

Prepare Job Log:
Cluster: