Closed: c0r3dump3d closed this issue 5 years ago
@c0r3dump3d please share all your group_vars/* content, your inventory hostfile, and the full playbook log.
We will look at this error, but note that in 3.2 we encourage using ceph-volume instead of ceph-disk, which is kept mostly for backward compatibility.
The output you pasted shows a /dev/sdaq device that doesn't seem to exist in your devices variable.
It's a bit confusing, please clarify.
Hi, thank you for the answer. Because I have multiple OSD servers, each a JBOD with multiple sd* disk assignments, I created symbolic links to homogenize the disk names.
The /dev/sdaq corresponds to the /dev/sdzo declared in osds.yml.
On the other hand, I have tried to use bluestore with LVM, with the SSD disks for DB and WAL data. I have seen that ceph-ansible has an lv-create playbook to create the LVM configuration on the disks beforehand, but that playbook only produces the configuration for filestore. Do you know if there's an equivalent for bluestore?
This is the inventory hostfile:
[mons]
mon21 ansible_host=10.141.1.241
mon22 ansible_host=10.141.1.242
mon23 ansible_host=10.141.1.243

[osds]
osd21 ansible_host=10.141.1.244
osd22 ansible_host=10.141.1.245
osd23 ansible_host=10.141.1.246
osd24 ansible_host=10.141.1.247
osd25 ansible_host=10.141.1.248
osd26 ansible_host=10.141.1.249
osd27 ansible_host=10.141.1.250
osd28 ansible_host=10.141.1.251
osd29 ansible_host=10.141.1.252
osd30 ansible_host=10.141.1.253

[mdss]
mon21 ansible_host=10.141.1.241
mon22 ansible_host=10.141.1.242
mon23 ansible_host=10.141.1.243

[mgrs]
mon21 ansible_host=10.141.1.241
mon22 ansible_host=10.141.1.242
mon23 ansible_host=10.141.1.243
And the log of the playbook:
I've tried to reproduce this issue, with no luck.
So I created symlinks to the real devices:
# find /dev/sd* -type l -ls
40275 0 lrwxrwxrwx 1 root root 8 Mar 4 19:24 /dev/sdya -> /dev/sdb
38565 0 lrwxrwxrwx 1 root root 8 Mar 4 19:24 /dev/sdyb -> /dev/sdc
40280 0 lrwxrwxrwx 1 root root 8 Mar 4 19:24 /dev/sdyc -> /dev/sdd
40281 0 lrwxrwxrwx 1 root root 8 Mar 4 19:24 /dev/sdyd -> /dev/sde
38570 0 lrwxrwxrwx 1 root root 8 Mar 4 19:24 /dev/sdye -> /dev/sdf
40284 0 lrwxrwxrwx 1 root root 8 Mar 4 19:24 /dev/sdyf -> /dev/sdg
38573 0 lrwxrwxrwx 1 root root 8 Mar 4 19:25 /dev/sdyg -> /dev/sdh
40287 0 lrwxrwxrwx 1 root root 8 Mar 4 19:25 /dev/sdyh -> /dev/sdi
38577 0 lrwxrwxrwx 1 root root 8 Mar 4 19:25 /dev/sdyi -> /dev/sdj
40290 0 lrwxrwxrwx 1 root root 8 Mar 4 19:25 /dev/sdyj -> /dev/sdk
39464 0 lrwxrwxrwx 1 root root 8 Mar 4 19:25 /dev/sdyk -> /dev/sdl
37731 0 lrwxrwxrwx 1 root root 8 Mar 4 19:25 /dev/sdyl -> /dev/sdm
39467 0 lrwxrwxrwx 1 root root 8 Mar 4 19:25 /dev/sdym -> /dev/sdn
37734 0 lrwxrwxrwx 1 root root 8 Mar 4 19:25 /dev/sdyn -> /dev/sdo
39470 0 lrwxrwxrwx 1 root root 8 Mar 4 19:26 /dev/sdyo -> /dev/sdp
40295 0 lrwxrwxrwx 1 root root 8 Mar 4 19:26 /dev/sdyp -> /dev/sdq
39473 0 lrwxrwxrwx 1 root root 8 Mar 4 19:26 /dev/sdyq -> /dev/sdr
40306 0 lrwxrwxrwx 1 root root 8 Mar 4 19:26 /dev/sdyr -> /dev/sdu
38591 0 lrwxrwxrwx 1 root root 8 Mar 4 19:27 /dev/sdys -> /dev/sdv
40309 0 lrwxrwxrwx 1 root root 8 Mar 4 19:27 /dev/sdyt -> /dev/sdw
38594 0 lrwxrwxrwx 1 root root 8 Mar 4 19:27 /dev/sdyu -> /dev/sdx
40313 0 lrwxrwxrwx 1 root root 8 Mar 4 19:27 /dev/sdyv -> /dev/sdy
39474 0 lrwxrwxrwx 1 root root 8 Mar 4 19:27 /dev/sdyw -> /dev/sdz
37741 0 lrwxrwxrwx 1 root root 8 Mar 4 19:29 /dev/sdzy -> /dev/sds
39479 0 lrwxrwxrwx 1 root root 8 Mar 4 19:29 /dev/sdzz -> /dev/sdt
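For reference, a minimal sketch of how such absolute-target aliases can be created with ln (run here against scratch files in a temporary directory rather than real /dev nodes; the alias/target names are illustrative):

```shell
# Sketch: create alias names that point at "devices" via absolute
# target paths, mirroring the /dev/sdya -> /dev/sdb style listed above.
# Scratch files stand in for the real block devices.
workdir=$(mktemp -d)
aliases=(sdya sdyb sdyc)
targets=(sdb sdc sdd)
for i in "${!aliases[@]}"; do
    touch "$workdir/${targets[$i]}"                            # stand-in for /dev/sdX
    ln -s "$workdir/${targets[$i]}" "$workdir/${aliases[$i]}"  # absolute target path
done
readlink "$workdir/sdya"    # prints an absolute path ending in /sdb
```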
And with this configuration it's working fine:
osd_scenario: non-collocated
osd_objectstore: bluestore
devices:
- /dev/sdya
- /dev/sdyb
- /dev/sdyc
- /dev/sdyd
- /dev/sdye
- /dev/sdyf
- /dev/sdyg
- /dev/sdyh
- /dev/sdyi
- /dev/sdyj
- /dev/sdyk
- /dev/sdyl
- /dev/sdym
- /dev/sdyn
- /dev/sdyo
- /dev/sdyp
- /dev/sdyq
- /dev/sdyr
- /dev/sdys
- /dev/sdyt
- /dev/sdyu
- /dev/sdyv
- /dev/sdyw
dedicated_devices:
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
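If I understand the non-collocated scenario correctly, the pairing is positional: devices[i] gets its block.db and block.wal on dedicated_devices[i]. A small sketch of the mapping implied by the two lists above (this indexing behaviour is my reading of ceph-ansible, not taken from its source):

```shell
# Sketch: positional pairing devices[i] -> dedicated_devices[i],
# reproducing the two lists from the configuration above.
devices=()
for c in a b c d e f g h i j k l m n o p q r s t u v w; do
    devices+=("/dev/sdy$c")
done
dedicated=()
for _ in $(seq 12); do dedicated+=("/dev/sdzy"); done   # first 12 data disks
for _ in $(seq 11); do dedicated+=("/dev/sdzz"); done   # remaining 11 data disks

echo "${devices[0]} -> ${dedicated[0]}"     # /dev/sdya -> /dev/sdzy
echo "${devices[12]} -> ${dedicated[12]}"   # /dev/sdym -> /dev/sdzz
```

Given the symlinks listed earlier (/dev/sdzy -> /dev/sds, /dev/sdzz -> /dev/sdt), this is consistent with the ceph-disk output below: sdb through sdm carry their DB/WAL partitions on sds, and sdn onward on sdt.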
As a result, I can see the DB and WAL on the dedicated devices:
/dev/sdb :
/dev/sdb1 ceph data, active, cluster ceph, osd.0, block /dev/sdb2, block.db /dev/sds1, block.wal /dev/sds2
/dev/sdb2 ceph block, for /dev/sdb1
/dev/sdc :
/dev/sdc1 ceph data, active, cluster ceph, osd.1, block /dev/sdc2, block.db /dev/sds3, block.wal /dev/sds4
/dev/sdc2 ceph block, for /dev/sdc1
/dev/sdd :
/dev/sdd1 ceph data, active, cluster ceph, osd.2, block /dev/sdd2, block.db /dev/sds5, block.wal /dev/sds6
/dev/sdd2 ceph block, for /dev/sdd1
/dev/sde :
/dev/sde1 ceph data, active, cluster ceph, osd.3, block /dev/sde2, block.db /dev/sds7, block.wal /dev/sds8
/dev/sde2 ceph block, for /dev/sde1
/dev/sdf :
/dev/sdf1 ceph data, active, cluster ceph, osd.4, block /dev/sdf2, block.db /dev/sds9, block.wal /dev/sds10
/dev/sdf2 ceph block, for /dev/sdf1
/dev/sdg :
/dev/sdg1 ceph data, active, cluster ceph, osd.5, block /dev/sdg2, block.db /dev/sds11, block.wal /dev/sds12
/dev/sdg2 ceph block, for /dev/sdg1
/dev/sdh :
/dev/sdh1 ceph data, active, cluster ceph, osd.6, block /dev/sdh2, block.db /dev/sds13, block.wal /dev/sds14
/dev/sdh2 ceph block, for /dev/sdh1
/dev/sdi :
/dev/sdi1 ceph data, active, cluster ceph, osd.7, block /dev/sdi2, block.db /dev/sds15, block.wal /dev/sds16
/dev/sdi2 ceph block, for /dev/sdi1
/dev/sdj :
/dev/sdj1 ceph data, active, cluster ceph, osd.8, block /dev/sdj2, block.db /dev/sds17, block.wal /dev/sds18
/dev/sdj2 ceph block, for /dev/sdj1
/dev/sdk :
/dev/sdk1 ceph data, active, cluster ceph, osd.9, block /dev/sdk2, block.db /dev/sds19, block.wal /dev/sds20
/dev/sdk2 ceph block, for /dev/sdk1
/dev/sdl :
/dev/sdl1 ceph data, active, cluster ceph, osd.10, block /dev/sdl2, block.db /dev/sds21, block.wal /dev/sds22
/dev/sdl2 ceph block, for /dev/sdl1
/dev/sdm :
/dev/sdm1 ceph data, active, cluster ceph, osd.11, block /dev/sdm2, block.db /dev/sds23, block.wal /dev/sds24
/dev/sdm2 ceph block, for /dev/sdm1
/dev/sdn :
/dev/sdn1 ceph data, active, cluster ceph, osd.12, block /dev/sdn2, block.db /dev/sdt1, block.wal /dev/sdt2
/dev/sdn2 ceph block, for /dev/sdn1
/dev/sdo :
/dev/sdo1 ceph data, active, cluster ceph, osd.13, block /dev/sdo2, block.db /dev/sdt3, block.wal /dev/sdt4
/dev/sdo2 ceph block, for /dev/sdo1
/dev/sdp :
/dev/sdp1 ceph data, active, cluster ceph, osd.14, block /dev/sdp2, block.db /dev/sdt5, block.wal /dev/sdt6
/dev/sdp2 ceph block, for /dev/sdp1
/dev/sdq :
/dev/sdq1 ceph data, active, cluster ceph, osd.15, block /dev/sdq2, block.db /dev/sdt7, block.wal /dev/sdt8
/dev/sdq2 ceph block, for /dev/sdq1
/dev/sdr :
/dev/sdr1 ceph data, active, cluster ceph, osd.16, block /dev/sdr2, block.db /dev/sdt9, block.wal /dev/sdt10
/dev/sdr2 ceph block, for /dev/sdr1
/dev/sds :
/dev/sds1 ceph block.db, for /dev/sdb1
/dev/sds10 ceph block.wal, for /dev/sdf1
/dev/sds11 ceph block.db, for /dev/sdg1
/dev/sds12 ceph block.wal, for /dev/sdg1
/dev/sds13 ceph block.db, for /dev/sdh1
/dev/sds14 ceph block.wal, for /dev/sdh1
/dev/sds15 ceph block.db, for /dev/sdi1
/dev/sds16 ceph block.wal, for /dev/sdi1
/dev/sds17 ceph block.db, for /dev/sdj1
/dev/sds18 ceph block.wal, for /dev/sdj1
/dev/sds19 ceph block.db, for /dev/sdk1
/dev/sds2 ceph block.wal, for /dev/sdb1
/dev/sds20 ceph block.wal, for /dev/sdk1
/dev/sds21 ceph block.db, for /dev/sdl1
/dev/sds22 ceph block.wal, for /dev/sdl1
/dev/sds23 ceph block.db, for /dev/sdm1
/dev/sds24 ceph block.wal, for /dev/sdm1
/dev/sds3 ceph block.db, for /dev/sdc1
/dev/sds4 ceph block.wal, for /dev/sdc1
/dev/sds5 ceph block.db, for /dev/sdd1
/dev/sds6 ceph block.wal, for /dev/sdd1
/dev/sds7 ceph block.db, for /dev/sde1
/dev/sds8 ceph block.wal, for /dev/sde1
/dev/sds9 ceph block.db, for /dev/sdf1
/dev/sdt :
/dev/sdt1 ceph block.db, for /dev/sdn1
/dev/sdt10 ceph block.wal, for /dev/sdr1
/dev/sdt11 ceph block.db, for /dev/sdu1
/dev/sdt12 ceph block.wal, for /dev/sdu1
/dev/sdt13 ceph block.db, for /dev/sdv1
/dev/sdt14 ceph block.wal, for /dev/sdv1
/dev/sdt15 ceph block.db, for /dev/sdw1
/dev/sdt16 ceph block.wal, for /dev/sdw1
/dev/sdt17 ceph block.db, for /dev/sdx1
/dev/sdt18 ceph block.wal, for /dev/sdx1
/dev/sdt19 ceph block.db, for /dev/sdy1
/dev/sdt2 ceph block.wal, for /dev/sdn1
/dev/sdt20 ceph block.wal, for /dev/sdy1
/dev/sdt21 ceph block.db, for /dev/sdz1
/dev/sdt22 ceph block.wal, for /dev/sdz1
/dev/sdt3 ceph block.db, for /dev/sdo1
/dev/sdt4 ceph block.wal, for /dev/sdo1
/dev/sdt5 ceph block.db, for /dev/sdp1
/dev/sdt6 ceph block.wal, for /dev/sdp1
/dev/sdt7 ceph block.db, for /dev/sdq1
/dev/sdt8 ceph block.wal, for /dev/sdq1
/dev/sdt9 ceph block.db, for /dev/sdr1
/dev/sdu :
/dev/sdu1 ceph data, active, cluster ceph, osd.17, block /dev/sdu2, block.db /dev/sdt11, block.wal /dev/sdt12
/dev/sdu2 ceph block, for /dev/sdu1
/dev/sdv :
/dev/sdv1 ceph data, active, cluster ceph, osd.18, block /dev/sdv2, block.db /dev/sdt13, block.wal /dev/sdt14
/dev/sdv2 ceph block, for /dev/sdv1
/dev/sdw :
/dev/sdw1 ceph data, active, cluster ceph, osd.19, block /dev/sdw2, block.db /dev/sdt15, block.wal /dev/sdt16
/dev/sdw2 ceph block, for /dev/sdw1
/dev/sdx :
/dev/sdx1 ceph data, active, cluster ceph, osd.20, block /dev/sdx2, block.db /dev/sdt17, block.wal /dev/sdt18
/dev/sdx2 ceph block, for /dev/sdx1
/dev/sdy :
/dev/sdy1 ceph data, active, cluster ceph, osd.21, block /dev/sdy2, block.db /dev/sdt19, block.wal /dev/sdt20
/dev/sdy2 ceph block, for /dev/sdy1
/dev/sdz :
/dev/sdz1 ceph data, active, cluster ceph, osd.22, block /dev/sdz2, block.db /dev/sdt21, block.wal /dev/sdt22
/dev/sdz2 ceph block, for /dev/sdz1
Are you sure you correctly defined the dedicated_devices variable? Because in the ansible log we can see:
ceph-disk prepare --cluster ceph --bluestore --block.db --block.wal /dev/sda
The block.db and block.wal devices are missing. You should have something like:
ceph-disk prepare --cluster ceph --bluestore --block.db /dev/sds --block.wal /dev/sds /dev/sda
Thank you for the answer. I'm using this configuration in osds.yml:
osd_objectstore: bluestore
osd_scenario: non-collocated
devices:
- /dev/sdya
- /dev/sdyb
- /dev/sdyc
- /dev/sdyd
- /dev/sdye
- /dev/sdyf
- /dev/sdyg
- /dev/sdyh
- /dev/sdyi
- /dev/sdyj
- /dev/sdyk
- /dev/sdyl
- /dev/sdym
- /dev/sdyn
- /dev/sdyo
- /dev/sdyp
- /dev/sdyq
- /dev/sdyr
- /dev/sdys
- /dev/sdyt
- /dev/sdyu
- /dev/sdyv
- /dev/sdyw
- /dev/sdyx
- /dev/sdyy
- /dev/sdyz
- /dev/sdza
- /dev/sdzb
- /dev/sdzc
- /dev/sdzd
- /dev/sdze
- /dev/sdzf
- /dev/sdzg
- /dev/sdzh
- /dev/sdzi
- /dev/sdzj
- /dev/sdzk
- /dev/sdzl
- /dev/sdzm
- /dev/sdzn
- /dev/sdzo
- /dev/sdzp
- /dev/sdzq
dedicated_devices:
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzz
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
- /dev/sdzy
I have discovered that the playbook works if I don't use the symbolic links. Why could this happen?
I don't know, because it was working fine for me. BTW I just discovered that there's a little difference between your setup and mine in the symlink creation. According to your comment in https://github.com/ceph/ceph-ansible/issues/3650#issuecomment-468574015 you made symlinks like sdya -> sda (assuming you were in the /dev directory), but you didn't use an absolute path to the target device. On my side I was using /dev/sdya -> /dev/sdb, as you can see in https://github.com/ceph/ceph-ansible/issues/3650#issuecomment-469452318
Maybe you could give it a try?
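To make the suggestion concrete, here is a quick way to see the difference between the two symlink styles (using a scratch directory instead of /dev; the names are illustrative):

```shell
# Sketch: relative vs. absolute symlink targets.
d=$(mktemp -d)
touch "$d/sda"
ln -s sda      "$d/rel_alias"   # relative target, like udev's SYMLINK+="sdya"
ln -s "$d/sda" "$d/abs_alias"   # absolute target, like ln -s /dev/sda /dev/sdya
readlink "$d/rel_alias"   # sda
readlink "$d/abs_alias"   # absolute path ending in /sda
```

Both links resolve to the same device node, but tooling that inspects the raw link target (rather than canonicalizing it) will see different strings.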
Interesting observation, these are my symlinks:
find /dev/sd* -type l -ls
90356 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdya -> sda
92387 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyb -> sdb
101485 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyc -> sdc
117766 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyd -> sdd
90373 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdye -> sde
38777 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyf -> sdf
96432 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyg -> sdg
59230 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyh -> sdh
109596 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyi -> sdi
115727 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyj -> sdj
40937 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyk -> sdk
106545 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyl -> sdl
111666 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdym -> sdm
96445 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyn -> sdn
101470 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyo -> sdo
40954 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyp -> sdp
111653 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyq -> sdq
116742 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyr -> sdr
103480 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdys -> sdt
88300 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyt -> sdu
107572 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyu -> sdv
104511 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyv -> sdw
111679 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyw -> sdx
85273 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyx -> sdy
92416 0 lrwxrwxrwx 1 root root 3 Mar 6 09:35 /dev/sdyy -> sdz
95330 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdyz -> sdaa
100446 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdza -> sdab
79627 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzb -> sdac
100431 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzc -> sdad
87266 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzd -> sdae
107587 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdze -> sdaf
92429 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzf -> sdag
114696 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzg -> sdah
96460 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzh -> sdai
115744 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzi -> sdaj
83421 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzj -> sdak
101438 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzk -> sdal
113672 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzl -> sdam
101457 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzm -> sdan
98395 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzn -> sdao
91282 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzo -> sdap
89322 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzp -> sdaq
44969 0 lrwxrwxrwx 1 root root 4 Mar 6 09:35 /dev/sdzq -> sdar
I have some udev rules that perform this operation:
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="35000cca270c3c474", SYMLINK+="sdya"
How did you make your symlinks?
In another test, I created the symlinks with the ln command:
# find /dev/sd* -type l -ls
109300 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdya -> /dev/sda
109301 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyb -> /dev/sdaa
109302 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyc -> /dev/sdab
109303 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyd -> /dev/sdac
109304 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdye -> /dev/sdad
109305 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyf -> /dev/sdae
109306 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyg -> /dev/sdaf
109307 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyh -> /dev/sdag
109308 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyi -> /dev/sdah
109309 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyj -> /dev/sdai
109310 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyk -> /dev/sdaj
109311 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyl -> /dev/sdak
109312 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdym -> /dev/sdal
109313 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyn -> /dev/sdam
109314 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyo -> /dev/sdan
109315 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyp -> /dev/sdao
109316 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyq -> /dev/sdap
109317 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdyr -> /dev/sdaq
109318 0 lrwxrwxrwx 1 root root 9 Mar 6 09:57 /dev/sdys -> /dev/sdar
109319 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdyt -> /dev/sdb
109320 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdyu -> /dev/sdc
109321 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdyv -> /dev/sdd
109322 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdyw -> /dev/sde
109323 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdyx -> /dev/sdf
109324 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdyy -> /dev/sdg
109325 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdyz -> /dev/sdh
109326 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdza -> /dev/sdi
109327 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzb -> /dev/sdj
109328 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzc -> /dev/sdk
109329 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzd -> /dev/sdl
109330 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdze -> /dev/sdm
109331 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzf -> /dev/sdn
109332 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzg -> /dev/sdo
109333 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzh -> /dev/sdp
109334 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzi -> /dev/sdq
109335 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzj -> /dev/sdr
110063 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzk -> /dev/sdt
110064 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzl -> /dev/sdu
110065 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzm -> /dev/sdv
110066 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzn -> /dev/sdw
110067 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzo -> /dev/sdx
110068 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzp -> /dev/sdy
110069 0 lrwxrwxrwx 1 root root 8 Mar 6 09:57 /dev/sdzq -> /dev/sdz
192835 0 lrwxrwxrwx 1 root root 9 Mar 6 13:58 /dev/sdzy -> /dev/sdas
192834 0 lrwxrwxrwx 1 root root 8 Mar 6 13:58 /dev/sdzz -> /dev/sds
It doesn't work; it only works when I put the original devices, not the symlinks.
It seems to be a problem with my own infrastructure. I've been able to deploy it without using symlinks.
Bug Report
What happened: Trying to deploy a Ceph cluster with the bluestore and non-collocated options, I got this error:
failed: [osd24] (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/sdaq', u'script': u"unit 'MiB' print", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/sdaq', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/sdaq', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/sdaq', u'physical_block': 4096, u'table': u'unknown', u'logical_block': 512, u'model': u'ATA HGST HUH721212AL', u'unit': u'mib', u'size': 11444224.0}, '_ansible_ignore_errors': None, u'partitions': []}, None, None, u'/dev/sdaq']) => {"changed": true, "cmd": ["ceph-disk", "prepare", "--cluster", "ceph", "--bluestore", "--block.db", "--block.wal", "/dev/sdaq"], "delta": "0:00:00.127761", "end": "2019-02-28 11:33:51.074694", "item": [{"_ansible_ignore_errors": null, "_ansible_item_label": "/dev/sdaq", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "disk": {"dev": "/dev/sdaq", "logical_block": 512, "model": "ATA HGST HUH721212AL", "physical_block": 4096, "size": 11444224.0, "table": "unknown", "unit": "mib"}, "failed": false, "invocation": {"module_args": {"align": "optimal", "device": "/dev/sdaq", "flags": null, "label": "msdos", "name": null, "number": null, "part_end": "100%", "part_start": "0%", "part_type": "primary", "state": "info", "unit": "MiB"}}, "item": "/dev/sdaq", "partitions": [], "script": "unit 'MiB' print"}, null, null, "/dev/sdaq"], "msg": "non-zero return code", "rc": 2, "start": "2019-02-28 11:33:50.946933", "stderr": "/usr/lib/python2.7/site-packages/ceph_disk/main.py:5689: UserWarning: \n*******************************************************************************\nThis tool is now deprecated in favor of ceph-volume.\nIt is recommended to use ceph-volume for OSD deployments. For details see:\n\n http://docs.ceph.com/docs/master/ceph-volume/#migrating\n\n*******************************************************************************\n\n warnings.warn(DEPRECATION_WARNING)\nusage: ceph-disk prepare [-h] [--cluster NAME] [--cluster-uuid UUID]\n [--osd-uuid UUID] [--osd-id ID]\n [--crush-device-class CRUSH_DEVICE_CLASS] [--dmcrypt]\n [--dmcrypt-key-dir KEYDIR] [--prepare-key PATH]\n [--no-locking] [--fs-type FS_TYPE] [--zap-disk]\n [--data-dir] [--data-dev] [--lockbox LOCKBOX]\n [--lockbox-uuid UUID] [--journal-uuid UUID]\n [--journal-file] [--journal-dev] [--bluestore]\n [--filestore] [--block-uuid UUID] [--block-file]\n [--block-dev] [--block.db-uuid UUID]\n [--block.db-file] [--block.db-dev]\n [--block.db BLOCKDB] [--block.wal-uuid UUID]\n [--block.wal-file] [--block.wal-dev]\n [--block.wal BLOCKWAL]\n DATA [JOURNAL] [BLOCK]\nceph-disk prepare: error: argument --block.db: expected one argument", "stderr_lines": ["/usr/lib/python2.7/site-packages/ceph_disk/main.py:5689: UserWarning: ", "*******************************************************************************", "This tool is now deprecated in favor of ceph-volume.", "It is recommended to use ceph-volume for OSD deployments. For details see:", "", " http://docs.ceph.com/docs/master/ceph-volume/#migrating", "", "*******************************************************************************", "", " warnings.warn(DEPRECATION_WARNING)", "usage: ceph-disk prepare [-h] [--cluster NAME] [--cluster-uuid UUID]", " [--osd-uuid UUID] [--osd-id ID]", " [--crush-device-class CRUSH_DEVICE_CLASS] [--dmcrypt]", " [--dmcrypt-key-dir KEYDIR] [--prepare-key PATH]", " [--no-locking] [--fs-type FS_TYPE] [--zap-disk]", " [--data-dir] [--data-dev] [--lockbox LOCKBOX]", " [--lockbox-uuid UUID] [--journal-uuid UUID]", " [--journal-file] [--journal-dev] [--bluestore]", " [--filestore] [--block-uuid UUID] [--block-file]", " [--block-dev] [--block.db-uuid UUID]", " [--block.db-file] [--block.db-dev]", " [--block.db BLOCKDB] [--block.wal-uuid UUID]", " [--block.wal-file] [--block.wal-dev]", " [--block.wal BLOCKWAL]", " DATA [JOURNAL] [BLOCK]", "ceph-disk prepare: error: argument --block.db: expected one argument"], "stdout": "", "stdout_lines": []}
What you expected to happen: A Ceph cluster with bluestore, using two SSD disks for RocksDB and WAL data.
How to reproduce it (minimal and precise):
Environment:
uname -a: Linux 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
docker version: (not applicable)
ansible-playbook --version: ansible-playbook 2.6.14
git head or tag or stable branch: stable-3.2
ceph -v: ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)