ceph / ceph-ansible

Ansible playbooks to deploy Ceph, the distributed filesystem.
Apache License 2.0

ceph-volume lvm batch --report selects different disks constantly on each run #5412

Closed · cagriersen closed this 4 years ago

cagriersen commented 4 years ago

Bug Report

What happened: We use ceph-ansible through OpenStack TripleO to deploy and scale our OpenStack (Stein) cluster as an HCI environment. All of our hosts have six identical (same brand/model) disk devices that are used as OSDs. We currently have 10 ComputeHCI nodes in the environment and everything works fine: all six SSD disks are used as separate OSD disks, so every host runs six OSD containers without a problem. Since all of these disks are SSDs, there is no separate device for block.db. Here is our Ceph custom config snippet:

parameter_defaults:
  CephConfigOverrides:
    mon_max_pg_per_osd: 3072
    journal_size: 5120
    osd_pool_default_size: 2
    osd_pool_default_min_size: 2
    osd_pool_default_pg_num: 128
    osd_pool_default_pgp_num: 128
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
      - /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221101000000-lun-0
      - /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221102000000-lun-0
      - /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221103000000-lun-0
      - /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221104000000-lun-0
      - /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221105000000-lun-0

and the ceph-volume lvm list output from one of the current nodes:

====== osd.12 ======

  [block]       /dev/ceph-7aa01df5-835c-47a3-9d09-9ffbad4fbaa6/osd-data-c49bc47c-673e-4edd-8fab-72e7382367df

      block device              /dev/ceph-7aa01df5-835c-47a3-9d09-9ffbad4fbaa6/osd-data-c49bc47c-673e-4edd-8fab-72e7382367df
      block uuid                zCl6Kj-LhNm-PIDr-ldSy-qp7g-Wfx0-1ZRHCx
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  2459ea3c-e6e2-4e1a-8ba4-e34f08a2d0ab
      osd id                    12
      type                      block
      vdo                       0
      devices                   /dev/sde

====== osd.19 ======

  [block]       /dev/ceph-2249909a-889a-4333-b813-5ef70a62cf5f/osd-data-6730f3f7-4dc4-4cdd-83ae-1741dea9483c

      block device              /dev/ceph-2249909a-889a-4333-b813-5ef70a62cf5f/osd-data-6730f3f7-4dc4-4cdd-83ae-1741dea9483c
      block uuid                9nh8dv-KH6c-9XPl-VgDr-Tu33-dc5q-ZUqHmp
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  f124e156-114d-426f-9300-f62b52c13deb
      osd id                    19
      type                      block
      vdo                       0
      devices                   /dev/sdd

====== osd.26 ======

  [block]       /dev/ceph-61cf26a6-1812-481c-84e1-677ba23aa58b/osd-data-51e8fffe-c245-4798-bf3d-652278262aa7

      block device              /dev/ceph-61cf26a6-1812-481c-84e1-677ba23aa58b/osd-data-51e8fffe-c245-4798-bf3d-652278262aa7
      block uuid                UERWTw-ARvK-wDY2-W8H2-HWIF-QDjC-HcZw1X
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  adbd5cfd-2e90-43e8-a110-2ef2b378faf8
      osd id                    26
      type                      block
      vdo                       0
      devices                   /dev/sdc

====== osd.3 =======

  [block]       /dev/ceph-a51dda92-ec02-48eb-95b1-dbface28e08f/osd-data-81ef56f9-dc57-46ba-94b3-501116637434

      block device              /dev/ceph-a51dda92-ec02-48eb-95b1-dbface28e08f/osd-data-81ef56f9-dc57-46ba-94b3-501116637434
      block uuid                64ea1Y-igtk-Mvxx-ra8o-YomF-VllE-DafIKW
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  c8663c84-405f-4b83-a2ba-9b370454318d
      osd id                    3
      type                      block
      vdo                       0
      devices                   /dev/sdf

====== osd.35 ======

  [block]       /dev/ceph-6cb9a989-3076-42fc-9f5d-dfe13c47be8f/osd-data-003168ed-bbe0-4efd-9e0e-e2ac37b3d197

      block device              /dev/ceph-6cb9a989-3076-42fc-9f5d-dfe13c47be8f/osd-data-003168ed-bbe0-4efd-9e0e-e2ac37b3d197
      block uuid                Db1WY2-2D9w-CcGi-dTef-s3cK-p8mw-RsydrN
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  37c35e24-c25a-442a-84fc-90b561f6fda8
      osd id                    35
      type                      block
      vdo                       0
      devices                   /dev/sdb

====== osd.43 ======

  [block]       /dev/ceph-68ea2503-3933-4828-bf52-25d4506ab42a/osd-data-9862c6df-79ca-4b4d-a38f-ddef33a552fc

      block device              /dev/ceph-68ea2503-3933-4828-bf52-25d4506ab42a/osd-data-9862c6df-79ca-4b4d-a38f-ddef33a552fc
      block uuid                Oqaupb-WRxi-y92x-0LPv-MFDw-QhhJ-LFRhaz
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  7f510c2c-3154-4809-95b6-770dc2084bc2
      osd id                    43
      type                      block
      vdo                       0
      devices                   /dev/sda

As expected, there are six OSDs and no separate logical volumes for DBs.

However, the other day we wanted to scale up our HCI nodes, so we added another 10 nodes to the environment. These nodes have the same hardware configuration as the others (same brand/model of server with the same number and brand/model of SSD disks).

After we started the re-deployment, it ended up with an error in the ceph-ansible deployment phase for almost all of the disks attached to the newly added nodes:

stderr: '--> RuntimeError: Unable to use device, already a member of LVM: /dev/sdX'

Here's the task that failed:

2020-06-10 17:08:57,863 p=536281 u=root |  fatal: [computehci-8]: FAILED! => changed=true
  cmd:
  - docker
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - docker.io/ceph/daemon:v4.0.8-stable-4.0-nautilus-centos-7-x86_64
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --prepare
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdg
  - /dev/sdf
  - --report
  - --format=json
  msg: non-zero return code
  rc: 1
  stderr: '-->  RuntimeError: Unable to use device, already a member of LVM: /dev/sdd'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

While trying to understand the issue, I noticed that the ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sde --report command selects some of the disks to create a logical volume for ceph-block-dbs.

When I check the ceph-volume lvm list output from a failed host, I see:

====== osd.56 ======

  [block]       /dev/ceph-block-1ae74af3-2b40-4154-b292-8d5b58d9513a/osd-block-acc76dc4-530e-4981-a8d5-753e2bf76f48

      block device              /dev/ceph-block-1ae74af3-2b40-4154-b292-8d5b58d9513a/osd-block-acc76dc4-530e-4981-a8d5-753e2bf76f48
      block uuid                wna3fr-Drg0-KdF7-ghtq-lvdD-0TrT-zv4cuE
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      db device                 /dev/ceph-block-dbs-9c5c0af1-416f-4904-a994-47ddf416c03e/osd-block-db-665fd157-ce37-481d-92b4-c28f3fe6ae9b
      db uuid                   fOcwah-SnFg-ub7E-jgfP-K05R-FYzM-dBM8UD
      encrypted                 0
      osd fsid                  f6b19cdb-38f7-42fb-b166-0585cc101d73
      osd id                    56
      type                      block
      vdo                       0
      devices                   /dev/sda

  [db]          /dev/ceph-block-dbs-9c5c0af1-416f-4904-a994-47ddf416c03e/osd-block-db-665fd157-ce37-481d-92b4-c28f3fe6ae9b

      block device              /dev/ceph-block-1ae74af3-2b40-4154-b292-8d5b58d9513a/osd-block-acc76dc4-530e-4981-a8d5-753e2bf76f48
      block uuid                wna3fr-Drg0-KdF7-ghtq-lvdD-0TrT-zv4cuE
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      db device                 /dev/ceph-block-dbs-9c5c0af1-416f-4904-a994-47ddf416c03e/osd-block-db-665fd157-ce37-481d-92b4-c28f3fe6ae9b
      db uuid                   fOcwah-SnFg-ub7E-jgfP-K05R-FYzM-dBM8UD
      encrypted                 0
      osd fsid                  f6b19cdb-38f7-42fb-b166-0585cc101d73
      osd id                    56
      type                      db
      vdo                       0
      devices                   /dev/sdd,/dev/sde

====== osd.67 ======

  [block]       /dev/ceph-block-3745ba3a-f2c2-4664-87db-8bd07537fdcd/osd-block-64da345d-260b-4236-a643-8d02498220d7

      block device              /dev/ceph-block-3745ba3a-f2c2-4664-87db-8bd07537fdcd/osd-block-64da345d-260b-4236-a643-8d02498220d7
      block uuid                nVIZ6I-E6TR-USXB-4imH-ypL2-OFxT-JhYPSH
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      db device                 /dev/ceph-block-dbs-9c5c0af1-416f-4904-a994-47ddf416c03e/osd-block-db-adac6057-5a27-462e-b1e1-5b2a5c2e5524
      db uuid                   SdG5do-shqq-9DI5-rUhI-7pgk-FHbb-w1rOmQ
      encrypted                 0
      osd fsid                  2080913d-9963-492e-9dd0-1f0c99fedbf2
      osd id                    67
      type                      block
      vdo                       0
      devices                   /dev/sdf

  [db]          /dev/ceph-block-dbs-9c5c0af1-416f-4904-a994-47ddf416c03e/osd-block-db-adac6057-5a27-462e-b1e1-5b2a5c2e5524

      block device              /dev/ceph-block-3745ba3a-f2c2-4664-87db-8bd07537fdcd/osd-block-64da345d-260b-4236-a643-8d02498220d7
      block uuid                nVIZ6I-E6TR-USXB-4imH-ypL2-OFxT-JhYPSH
      cephx lockbox secret
      cluster fsid              d9012246-3dde-11ea-bdea-005056bb7a74
      cluster name              ceph
      crush device class        None
      db device                 /dev/ceph-block-dbs-9c5c0af1-416f-4904-a994-47ddf416c03e/osd-block-db-adac6057-5a27-462e-b1e1-5b2a5c2e5524
      db uuid                   SdG5do-shqq-9DI5-rUhI-7pgk-FHbb-w1rOmQ
      encrypted                 0
      osd fsid                  2080913d-9963-492e-9dd0-1f0c99fedbf2
      osd id                    67
      type                      db
      vdo                       0
      devices                   /dev/sdb,/dev/sdc

The lsblk and lvscan outputs from the same host show:

[root@computehci-19 ~]# lsblk
NAME                                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                                     8:0    0  1.1T  0 disk
└─ceph--block--1ae74af3--2b40--4154--b292--8d5b58d9513a-osd--block--acc76dc4--530e--4981--a8d5--753e2bf76f48          253:0    0  1.1T  0 lvm
sdb                                                                                                                     8:16   0  1.1T  0 disk
└─ceph--block--dbs--9c5c0af1--416f--4904--a994--47ddf416c03e-osd--block--db--adac6057--5a27--462e--b1e1--5b2a5c2e5524 253:3    0  2.2T  0 lvm
sdc                                                                                                                     8:32   0  1.1T  0 disk
└─ceph--block--dbs--9c5c0af1--416f--4904--a994--47ddf416c03e-osd--block--db--adac6057--5a27--462e--b1e1--5b2a5c2e5524 253:3    0  2.2T  0 lvm
sdd                                                                                                                     8:48   0  1.1T  0 disk
└─ceph--block--dbs--9c5c0af1--416f--4904--a994--47ddf416c03e-osd--block--db--665fd157--ce37--481d--92b4--c28f3fe6ae9b 253:1    0  2.2T  0 lvm
sde                                                                                                                     8:64   0  1.1T  0 disk
└─ceph--block--dbs--9c5c0af1--416f--4904--a994--47ddf416c03e-osd--block--db--665fd157--ce37--481d--92b4--c28f3fe6ae9b 253:1    0  2.2T  0 lvm
sdf                                                                                                                     8:80   0  1.1T  0 disk
└─ceph--block--3745ba3a--f2c2--4664--87db--8bd07537fdcd-osd--block--64da345d--260b--4236--a643--8d02498220d7          253:2    0  1.1T  0 lvm
sdg                                                                                                                     8:96   0 59.6G  0 disk
├─sdg1                                                                                                                  8:97   0    1M  0 part
└─sdg2                                                                                                                  8:98   0 59.6G  0 part /
[root@computehci-19 ~]# lvscan
  ACTIVE            '/dev/ceph-block-dbs-9c5c0af1-416f-4904-a994-47ddf416c03e/osd-block-db-665fd157-ce37-481d-92b4-c28f3fe6ae9b' [2.18 TiB] inherit
  ACTIVE            '/dev/ceph-block-dbs-9c5c0af1-416f-4904-a994-47ddf416c03e/osd-block-db-adac6057-5a27-462e-b1e1-5b2a5c2e5524' [2.18 TiB] inherit
  ACTIVE            '/dev/ceph-block-3745ba3a-f2c2-4664-87db-8bd07537fdcd/osd-block-64da345d-260b-4236-a643-8d02498220d7' [1.09 TiB] inherit
  ACTIVE            '/dev/ceph-block-1ae74af3-2b40-4154-b292-8d5b58d9513a/osd-block-acc76dc4-530e-4981-a8d5-753e2bf76f48' [1.09 TiB] inherit

It seems ceph-volume has incorrectly selected some of the disks to create a logical volume for block.db. Since all of the disks are the same model of SSD, it should use each of them as a separate OSD without any separate DBs.

After seeing this odd behavior, I logged in to one of the faulty nodes, wiped all of the disks (including the LVM metadata), and ran the same command that ceph-ansible runs.
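
For reference, one way to fully clear a disk's previous LVM/Ceph state before retrying is shown below (the exact wipe commands aren't included in this report; the device name is only an example):

# Clear the Ceph LVs/VG/PV on a device and wipe its labels (repeat per device).
ceph-volume lvm zap /dev/sdb --destroy
# Or, without ceph-volume: remove the ceph-* VG/PV with vgremove/pvremove, then:
wipefs --all /dev/sdb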

So I started a container from the same ceph docker image:

docker run -it --privileged --net=host --ipc=host --ulimit nofile=1024:4096 \
-v /run/lock/lvm:/run/lock/lvm:z \
-v /var/run/udev/:/var/run/udev/:z \
-v /dev:/dev \
-v /etc/ceph:/etc/ceph:z \
-v /run/lvm/:/run/lvm/ \
-v /var/lib/ceph/:/var/lib/ceph/:z \
-v /var/log/ceph/:/var/log/ceph/:z \
--entrypoint=/bin/bash \
docker.io/ceph/daemon:v4.0.8-stable-4.0-nautilus-centos-7-x86_64 

And ran:

ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sde --report

Even more oddly, this command returns different output on each run, as shown below:

#### FIRST TRY ####
[root@computehci-10 /]# time ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sde --report

Total OSDs: 3

Solid State VG:
  Targets:   block.db                  Total size: 3.27 TB
  Total LVs: 3                         Size per LV: 1.09 TB
  Devices:   /dev/sdb, /dev/sdc, /dev/sdd

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sda                                                1.09 TB         100%
  [block.db]      vg: vg/lv                                               1.09 TB         33%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdf                                                1.09 TB         100%
  [block.db]      vg: vg/lv                                               1.09 TB         33%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sde                                                1.09 TB         100%
  [block.db]      vg: vg/lv                                               1.09 TB         33%

real    0m0.570s
user    0m0.161s
sys 0m0.201s

#### SECOND TRY ####
[root@computehci-10 /]# time ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sde --report

Total OSDs: 2

Solid State VG:
  Targets:   block.db                  Total size: 4.36 TB
  Total LVs: 2                         Size per LV: 2.18 TB
  Devices:   /dev/sdb, /dev/sdc, /dev/sdd, /dev/sdf

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sda                                                1.09 TB         100%
  [block.db]      vg: vg/lv                                               2.18 TB         50%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sde                                                1.09 TB         100%
  [block.db]      vg: vg/lv                                               2.18 TB         50%

real    0m0.591s
user    0m0.173s
sys 0m0.185s

As you can see, on the first try it decided to create 3 OSDs, and when I ran the same command again it decided to create 2 OSDs.
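
The nondeterminism is easy to demonstrate by re-running the report in a loop and counting how many [data] devices it plans each time (a quick sketch, run inside the same container as above):

# On the affected image the count changes between runs even though the disks do not.
for i in $(seq 5); do
  ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sde --report | grep -c '\[data\]'
done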

On top of that, this deployment left us with a Ceph cluster in HEALTH_WARN state, since the newly created OSDs were added to the cluster and made it inconsistent.

The only difference is that we use CentOS 7.8.2003 on the faulty nodes, while our existing nodes run CentOS 7.7.1908. The issue might be related to this difference, so I'll try downgrading the CentOS version on the faulty nodes.

What you expected to happen:

ceph-volume should make a consistent decision and create one OSD per disk, with each OSD's DB data on the same disk.

How to reproduce it (minimal and precise):

Share your group_vars files, inventory and full ceph-ansible log

The full output of ceph-volume's lvcreate action from /var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log:

2020-06-10 17:01:13,623 p=536281 u=root |  changed: [computehci-13] => changed=true
  cmd:
  - docker
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - docker.io/ceph/daemon:v4.0.8-stable-4.0-nautilus-centos-7-x86_64
  - --cluster
  - ceph
--
    Running command: /usr/sbin/lvcreate --yes -l 1117 -n osd-block-20e8bd53-899c-440c-9422-a61beb834899 ceph-block-d7995e7a-e059-4b25-8b66-f4868316c3c8
     stdout: Wiping linux_raid_member signature on /dev/ceph-block-d7995e7a-e059-4b25-8b66-f4868316c3c8/osd-block-20e8bd53-899c-440c-9422-a61beb834899.
     stdout: Logical volume "osd-block-20e8bd53-899c-440c-9422-a61beb834899" created.
    Running command: /usr/sbin/lvcreate --yes -l 558 -n osd-block-db-9be955b7-2e52-4ca6-894d-9bf4ecab97ae ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775
     stdout: Logical volume "osd-block-db-9be955b7-2e52-4ca6-894d-9bf4ecab97ae" created.
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1ee23414-741d-44cd-a4c8-f072bb7065dd
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-54
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-d7995e7a-e059-4b25-8b66-f4868316c3c8/osd-block-20e8bd53-899c-440c-9422-a61beb834899
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--d7995e7a--e059--4b25--8b66--f4868316c3c8-osd--block--20e8bd53--899c--440c--9422--a61beb834899
    Running command: /bin/ln -s /dev/ceph-block-d7995e7a-e059-4b25-8b66-f4868316c3c8/osd-block-20e8bd53-899c-440c-9422-a61beb834899 /var/lib/ceph/osd/ceph-54/block
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-54/activate.monmap
     stderr: got monmap epoch 1
    Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-54/keyring --create-keyring --name osd.54 --add-key *******
     stdout: creating /var/lib/ceph/osd/ceph-54/keyring
    added entity osd.54 auth(key=*******)
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-54/keyring
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-54/
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775/osd-block-db-9be955b7-2e52-4ca6-894d-9bf4ecab97ae
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--dbs--f0b4d412--28b3--46d1--976d--0494893eb775-osd--block--db--9be955b7--2e52--4ca6--894d--9bf4ecab97ae
    Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 54 --monmap /var/lib/ceph/osd/ceph-54/activate.monmap --keyfile - --bluestore-block-db-path /dev/ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775/osd-block-db-9be955b7-2e52-4ca6-894d-9bf4ecab97ae --osd-data /var/lib/ceph/osd/ceph-54/ --osd-uuid 1ee23414-741d-44cd-a4c8-f072bb7065dd --setuser ceph --setgroup ceph
    --> ceph-volume lvm prepare successful for: ceph-block-d7995e7a-e059-4b25-8b66-f4868316c3c8/osd-block-20e8bd53-899c-440c-9422-a61beb834899
    Running command: /usr/sbin/lvcreate --yes -l 1117 -n osd-block-ba463595-edc5-43bb-910b-6b9f71655923 ceph-block-484c50b7-bb91-46bf-9b36-975c07e6274d
     stdout: Wiping linux_raid_member signature on /dev/ceph-block-484c50b7-bb91-46bf-9b36-975c07e6274d/osd-block-ba463595-edc5-43bb-910b-6b9f71655923.
     stdout: Logical volume "osd-block-ba463595-edc5-43bb-910b-6b9f71655923" created.
    Running command: /usr/sbin/lvcreate --yes -l 558 -n osd-block-db-6a74812b-6db5-4e30-858e-1ac3969bf79a ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775
     stdout: Logical volume "osd-block-db-6a74812b-6db5-4e30-858e-1ac3969bf79a" created.
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 48a7c490-c10b-4b7f-a323-06b6b4ed3086
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-65
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-484c50b7-bb91-46bf-9b36-975c07e6274d/osd-block-ba463595-edc5-43bb-910b-6b9f71655923
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--484c50b7--bb91--46bf--9b36--975c07e6274d-osd--block--ba463595--edc5--43bb--910b--6b9f71655923
    Running command: /bin/ln -s /dev/ceph-block-484c50b7-bb91-46bf-9b36-975c07e6274d/osd-block-ba463595-edc5-43bb-910b-6b9f71655923 /var/lib/ceph/osd/ceph-65/block
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-65/activate.monmap
     stderr: got monmap epoch 1
    Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-65/keyring --create-keyring --name osd.65 --add-key *******
     stdout: creating /var/lib/ceph/osd/ceph-65/keyring
    added entity osd.65 auth(key=*******)
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-65/keyring
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-65/
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775/osd-block-db-6a74812b-6db5-4e30-858e-1ac3969bf79a
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--dbs--f0b4d412--28b3--46d1--976d--0494893eb775-osd--block--db--6a74812b--6db5--4e30--858e--1ac3969bf79a
    Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 65 --monmap /var/lib/ceph/osd/ceph-65/activate.monmap --keyfile - --bluestore-block-db-path /dev/ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775/osd-block-db-6a74812b-6db5-4e30-858e-1ac3969bf79a --osd-data /var/lib/ceph/osd/ceph-65/ --osd-uuid 48a7c490-c10b-4b7f-a323-06b6b4ed3086 --setuser ceph --setgroup ceph
    --> ceph-volume lvm prepare successful for: ceph-block-484c50b7-bb91-46bf-9b36-975c07e6274d/osd-block-ba463595-edc5-43bb-910b-6b9f71655923
    Running command: /usr/sbin/lvcreate --yes -l 1117 -n osd-block-6551196d-d370-48e2-87e2-66ecc95088dc ceph-block-41d3a413-6394-474c-b81f-327daf277a8e
     stdout: Logical volume "osd-block-6551196d-d370-48e2-87e2-66ecc95088dc" created.
    Running command: /usr/sbin/lvcreate --yes -l 558 -n osd-block-db-8df138cd-3462-4214-a2e2-8f00b61c7bb4 ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775
     stdout: Logical volume "osd-block-db-8df138cd-3462-4214-a2e2-8f00b61c7bb4" created.
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b66f1c71-e379-48c6-84d7-0491ddf14d7c
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-71
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-41d3a413-6394-474c-b81f-327daf277a8e/osd-block-6551196d-d370-48e2-87e2-66ecc95088dc
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--41d3a413--6394--474c--b81f--327daf277a8e-osd--block--6551196d--d370--48e2--87e2--66ecc95088dc
    Running command: /bin/ln -s /dev/ceph-block-41d3a413-6394-474c-b81f-327daf277a8e/osd-block-6551196d-d370-48e2-87e2-66ecc95088dc /var/lib/ceph/osd/ceph-71/block
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-71/activate.monmap
     stderr: got monmap epoch 1
    Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-71/keyring --create-keyring --name osd.71 --add-key *******
     stdout: creating /var/lib/ceph/osd/ceph-71/keyring
    added entity osd.71 auth(key=*******)
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-71/keyring
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-71/
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775/osd-block-db-8df138cd-3462-4214-a2e2-8f00b61c7bb4
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--dbs--f0b4d412--28b3--46d1--976d--0494893eb775-osd--block--db--8df138cd--3462--4214--a2e2--8f00b61c7bb4
    Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 71 --monmap /var/lib/ceph/osd/ceph-71/activate.monmap --keyfile - --bluestore-block-db-path /dev/ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775/osd-block-db-8df138cd-3462-4214-a2e2-8f00b61c7bb4 --osd-data /var/lib/ceph/osd/ceph-71/ --osd-uuid b66f1c71-e379-48c6-84d7-0491ddf14d7c --setuser ceph --setgroup ceph
    --> ceph-volume lvm prepare successful for: ceph-block-41d3a413-6394-474c-b81f-327daf277a8e/osd-block-6551196d-d370-48e2-87e2-66ecc95088dc
    Running command: /usr/sbin/lvcreate --yes -l 1117 -n osd-block-dce7284a-7fda-4eae-bdef-990e9f8105f0 ceph-block-62b49aed-2ee8-4b81-a8e9-b533bb98a1fb
     stdout: Logical volume "osd-block-dce7284a-7fda-4eae-bdef-990e9f8105f0" created.
    Running command: /usr/sbin/lvcreate --yes -l 558 -n osd-block-db-70e01d65-1edd-4e14-a698-c1dd991376fe ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775
     stdout: Logical volume "osd-block-db-70e01d65-1edd-4e14-a698-c1dd991376fe" created.
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 23ecf271-028a-4af2-a97c-af20f1972b40
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-74
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-62b49aed-2ee8-4b81-a8e9-b533bb98a1fb/osd-block-dce7284a-7fda-4eae-bdef-990e9f8105f0
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--62b49aed--2ee8--4b81--a8e9--b533bb98a1fb-osd--block--dce7284a--7fda--4eae--bdef--990e9f8105f0
    Running command: /bin/ln -s /dev/ceph-block-62b49aed-2ee8-4b81-a8e9-b533bb98a1fb/osd-block-dce7284a-7fda-4eae-bdef-990e9f8105f0 /var/lib/ceph/osd/ceph-74/block
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-74/activate.monmap
     stderr: got monmap epoch 1
    Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-74/keyring --create-keyring --name osd.74 --add-key *******
     stdout: creating /var/lib/ceph/osd/ceph-74/keyring
    added entity osd.74 auth(key=*******)
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-74/keyring
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-74/
    Running command: /bin/chown -h ceph:ceph /dev/ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775/osd-block-db-70e01d65-1edd-4e14-a698-c1dd991376fe
    Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--block--dbs--f0b4d412--28b3--46d1--976d--0494893eb775-osd--block--db--70e01d65--1edd--4e14--a698--c1dd991376fe
    Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 74 --monmap /var/lib/ceph/osd/ceph-74/activate.monmap --keyfile - --bluestore-block-db-path /dev/ceph-block-dbs-f0b4d412-28b3-46d1-976d-0494893eb775/osd-block-db-70e01d65-1edd-4e14-a698-c1dd991376fe --osd-data /var/lib/ceph/osd/ceph-74/ --osd-uuid 23ecf271-028a-4af2-a97c-af20f1972b40 --setuser ceph --setgroup ceph
    --> ceph-volume lvm prepare successful for: ceph-block-62b49aed-2ee8-4b81-a8e9-b533bb98a1fb/osd-block-dce7284a-7fda-4eae-bdef-990e9f8105f0
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

I also see some Python exceptions in /var/log/ceph/ceph-volume.log on the faulty nodes:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ceph_volume/util/prepare.py", line 88, in get_block_db_size
    conf_db_size = conf.ceph.get_safe('osd', 'bluestore_block_db_size', None)
  File "/usr/lib/python2.7/site-packages/ceph_volume/__init__.py", line 15, in __getattr__
    raise RuntimeError("No valid ceph configuration file was loaded.")
RuntimeError: No valid ceph configuration file was loaded.
[2020-06-10 13:59:28,805][ceph_volume.util.prepare][DEBUG ] block.db has no size configuration, will fallback to using as much as possible
[2020-06-10 13:59:28,806][ceph_volume.util.prepare][ERROR ] failed to load ceph configuration, will use defaults
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ceph_volume/util/prepare.py", line 122, in get_block_wal_size
    conf_wal_size = conf.ceph.get_safe('osd', 'bluestore_block_wal_size', None)
  File "/usr/lib/python2.7/site-packages/ceph_volume/__init__.py", line 15, in __getattr__
    raise RuntimeError("No valid ceph configuration file was loaded.")
RuntimeError: No valid ceph configuration file was loaded.
[2020-06-10 13:59:28,806][ceph_volume.util.prepare][DEBUG ] block.wal has no size configuration, will fallback to using as much as possible

###########

[2020-06-10 13:59:28,361][ceph_volume.main][INFO  ] Running command: ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sde --report --format=json
[2020-06-10 13:59:28,362][ceph_volume.main][ERROR ] ignoring inability to load ceph.conf
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 141, in main
    conf.ceph = configuration.load(conf.path)
  File "/usr/lib/python2.7/site-packages/ceph_volume/configuration.py", line 51, in load
    raise exceptions.ConfigurationError(abspath=abspath)

#########
[2020-06-10 14:08:56,765][ceph_volume.devices.lvm.batch][INFO  ] Ignoring devices already used by ceph: /dev/sda
[2020-06-10 14:08:56,766][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 149, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py", line 320, in main
    self._get_strategy()
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py", line 303, in _get_strategy
    self.strategy = strategy.with_auto_devices(self.args, unused_devices)
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 25, in with_auto_devices
    return cls(args, devices)
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 20, in __init__
    self.validate_compute()
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/strategies.py", line 30, in validate_compute
    self.validate()
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 65, in validate
    validators.no_lvm_membership(self.data_devs)
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/validators.py", line 38, in no_lvm_membership
    raise RuntimeError(msg % device.abspath)
RuntimeError: Unable to use device, already a member of LVM: /dev/sdb
[2020-06-10 17:50:33,097][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm list

Environment:

cagriersen commented 4 years ago

Additional progress:

If I use CentOS Linux release 7.7.1908 (Core) with kernel 3.10.0-1062.el7.x86_64, the command works fine:

[root@cagri-test /]# ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg --report

Total OSDs: 6

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdb                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdc                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdd                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sde                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdf                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdg                                                99.00 GB        100%
[root@cagri-test /]# ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg --report

Total OSDs: 6

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdb                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdc                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdd                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sde                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdf                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdg                                                99.00 GB        100%
[root@cagri-test /]# ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg --report

Total OSDs: 6

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdb                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdc                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdd                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sde                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdf                                                99.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdg                                                99.00 GB        100%
dsavineau commented 4 years ago

It seems ceph-volume has incorrectly selected some of the disks to create a logical volume for block.db. Since all of the disks are the same model of SSD, it should use each of them as a separate OSD without any separate DBs.

This situation should only happen when there is a mix of HDD and SSD/NVMe devices when using ceph-volume batch; otherwise this is a ceph-volume issue. In the mixed case, ceph-volume will try to place the BlueStore data on the HDDs and the BlueStore DB on the SSD/NVMe devices.
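
Roughly speaking, the split that batch makes is driven entirely by the kernel's rotational flag; the following is only an illustrative sketch of that decision, not the actual ceph-volume code:

# Illustration only: mixed media => data on the HDDs plus a shared block.db VG on
# the SSDs; uniform media (all HDD or all SSD) => one standalone OSD per device.
lsblk -dn -o NAME,ROTA /dev/sd[a-f] | awk '
  $2 == 1 { hdd++ } $2 == 0 { ssd++ }
  END {
    if (hdd && ssd) print "mixed: data on HDDs, block.db on SSDs"
    else            print "uniform: one standalone OSD per device"
  }'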

If you only have SSD devices, then CentOS 7.7 and 7.8 are probably reporting the rotational device flag differently.

Could you run:

Would it also be possible to test with a more recent ceph container image (like v4.0.12, which is rebased on Ceph Nautilus 14.2.9)?

cagriersen commented 4 years ago

I'm sure that all the disks are SSDs and are reported correctly by the OS; here are the outputs:

[root@computehci-10 /]# ceph-volume inventory

Device Path               Size         rotates available Model name
/dev/sdb                  1.09 TB      False   True      INTEL SSDSC2BX01
/dev/sdc                  1.09 TB      False   True      INTEL SSDSC2BX01
/dev/sdd                  1.09 TB      False   True      INTEL SSDSC2BX01
/dev/sde                  1.09 TB      False   True      INTEL SSDSC2BX01
/dev/sdf                  1.09 TB      False   True      INTEL SSDSC2BX01
/dev/sdg                  1.09 TB      False   True      INTEL SSDSC2BX01
/dev/sda                  59.63 GB     False   False     SATADOM-SL 3IE3

[root@computehci-10 /]# for i in {a..f}; do cat /sys/block/sd$i/queue/rotational ; done;
0
0
0
0
0
0

[root@computehci-10 /]# lsblk -o NAME,ROTA
NAME   ROTA
sda       0
|-sda1    0
`-sda2    0
sdb       0
sdc       0
sdd       0
sde       0
sdf       0
sdg       0

But the newer ceph daemon image (v4.0.12-stable-4.0-nautilus-centos-7) works fine:

[root@computehci-10 /]# time ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sde --report

Total OSDs: 6

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sda                                                58.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdb                                                1.09 TB         100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdc                                                1.09 TB         100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdd                                                1.09 TB         100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdf                                                1.09 TB         100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sde                                                1.09 TB         100%

real    0m1.743s
user    0m0.889s
sys 0m0.483s
[root@computehci-10 /]# time ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sde --report

Total OSDs: 6

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sda                                                58.00 GB        100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdb                                                1.09 TB         100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdc                                                1.09 TB         100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdd                                                1.09 TB         100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdf                                                1.09 TB         100%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sde                                                1.09 TB         100%

real    0m1.707s
user    0m0.937s
sys 0m0.472s

However, we don't have a chance to update the image to this version manually, since we have a running OpenStack cluster and the image version is managed by TripleO.

Do you have any suggestions for how we can fix this without an OpenStack upgrade?

dsavineau commented 4 years ago

Do you have any suggestions for how we can fix this without an OpenStack upgrade?

No, there's nothing to be done on the ceph-ansible side, since the problem seems to come from ceph-volume itself; the only solution is to update to the latest version.

cagriersen-omma commented 4 years ago

I've resolved the issue by using an older CentOS generic cloud image. You can close the issue. Thanks for your attention.

minkimipt commented 4 years ago

@cagriersen-omma we've hit this issue too. Could you please elaborate on which old CentOS generic cloud image you used? What release was it? What kernel? Thank you!

cagri-rf commented 4 years ago

Hi @minkimipt

Please read the bug report and my follow-up comment above; I mentioned the versions there.

mgariepy commented 4 years ago

@minkimipt @cagri-rf @cagriersen-omma It seems to be a race condition on the ceph-volume side; see the upstream issue [1].

I'm just linking it here to make this issue a little easier to find.

[1] https://tracker.ceph.com/issues/47502

minkimipt commented 4 years ago

@mgariepy thanks for sharing. We were able to work around this by making ceph-volume believe that all disks are rotational. It's good to know that it was indeed a bug in ceph-volume.

foysalkayum commented 3 years ago

@minkimipt how did you manage to make ceph-volume believe that the disks are rotational?

minkimipt commented 3 years ago

Hi @foysalkayum,

We rebuilt the container with the following script; hopefully it's clear what it does. Basically, we replaced one line in the file "ceph_volume/util/device.py" inside the rhceph container image. Here's the script, which you will need to adapt:

#!/bin/bash
# dst_image = "172.18.2.1:8787/rhceph/rhceph-3-rhel7:3-42"
# script needs to run as root user on undercloud VM
file=/var/lib/contrail_cloud/openstack-containers/contrail_cloud-openstack-containers.tgz
# this will take some time because it's loading container images into the local registry of the undercloud
docker load -i $file
export dst_image=$1
src_image=$(docker images --format "{{ .Repository }}:{{ .Tag }}" | grep ceph)
docker run $src_image
# assumption is that there are no other exited ceph containers on the node
container=$(docker ps -a  | grep Exited | grep ceph | awk '{print $1}')
docker cp $container:/usr/lib/python2.7/site-packages/ceph_volume/util/device.py ./
# replacing the return of the rotational function with False, keeping a backup in device.py.bak
sed -i.bak "s/        return rotational.*/        return False/g" device.py
# creating Dockerfile to build the image
cat << EOF > Dockerfile
FROM $src_image
COPY device.py /usr/lib/python2.7/site-packages/ceph_volume/util/device.py
EOF
# building the image, tagging it with the tag of current image and pushing to the registry
docker build -t $dst_image .
docker push $dst_image
docker rm $container
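
Assuming the script is saved as, say, patch-ceph-image.sh (a hypothetical name), it takes the destination image reference as its only argument; using the example image from the commented line at the top, it would be invoked like this before re-running the deployment so the patched image gets used:

# Run as root on the undercloud; adapt registry, image name and tag to your environment.
bash patch-ceph-image.sh 172.18.2.1:8787/rhceph/rhceph-3-rhel7:3-42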