blueboxgroup / ursula

Ansible playbooks for operating OpenStack - Powering Blue Box Cloud.
https://www.blueboxcloud.com

update the pg per osd #2867

chengtcli closed this pull request 7 years ago

chengtcli commented 7 years ago

According to the Ceph docs, pgs_per_osd can be 100-300. It was 100 in earlier releases; we increase it to 200 to get a better PG distribution. The value of pgs_per_osd was already changed in the ceph-defaults role; this commit makes the ceph_pool module use that parameter, as sketched below.
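For reference, a minimal sketch of what this looks like on the Ansible side. The file path, the osd_count and pool_size variables, and the ceph_pool argument names are assumptions for illustration only, not copied from the ursula roles:

```yaml
# roles/ceph-defaults/defaults/main.yml  (illustrative path)
# Target placement groups per OSD; raised from 100 to 200 by this change.
pgs_per_osd: 200
```

```yaml
# Pool creation task; the real ceph_pool module's argument names may differ.
- name: create pool sized from pgs_per_osd instead of a hard-coded pg_num
  ceph_pool:
    name: rbd_ssd
    pg_num: "{{ (osd_count | int) * (pgs_per_osd | int) // (pool_size | default(3) | int) }}"
```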

bbc-jenkins commented 7 years ago

Can one of the admins verify this patch?


chengtcli commented 7 years ago

Tested on tardis using the VM environment envs/example/ci-ceph-ubuntu.

After deployment, the rbd_ssd pool has a PG count of 256 (3 OSDs), as expected.
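That number is consistent with the widely documented Ceph sizing guideline, total PGs ≈ (OSDs × PGs per OSD) / replica count, rounded up to a power of two: 3 × 200 / 3 = 200, which rounds up to 256. A small sketch of that arithmetic, assuming a replica count of 3; the variable names are illustrative, not taken from ursula:

```yaml
# Worked example only: reproduces 256 for 3 OSDs with pgs_per_osd=200,
# assuming replica count 3 and round-up-to-a-power-of-two sizing.
- name: example PG calculation for the ci-ceph-ubuntu environment
  vars:
    osd_count: 3
    pgs_per_osd: 200
    replica_count: 3
  debug:
    msg: "pg_num = {{ 2 ** ((osd_count * pgs_per_osd / replica_count) | log(2) | round(0, 'ceil') | int) }}"
```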

nirajdp76 commented 7 years ago

ok to test

nirajdp76 commented 7 years ago

Will there be any impact on existing large Ceph clusters on reconverge?

chengtcli commented 7 years ago

As you know, we use bbg-ceph-utils to adjust the PG number rather than updating it through the ceph_pool module. After reconverging an existing cluster, the PG number will be increased by bbg-ceph-utils (see the illustrative commands below).
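For anyone unfamiliar with that tool: the bbg-ceph-utils internals are not shown in this thread, but growing a pool's placement groups on a live cluster ultimately comes down to the standard ceph commands, roughly like this (pool name and target value are placeholders):

```yaml
# Illustration only: this is not how bbg-ceph-utils is actually invoked.
- name: raise pg_num and pgp_num on an existing pool
  command: "ceph osd pool set rbd_ssd {{ item }} 256"
  loop:
    - pg_num
    - pgp_num
```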