Closed. chengtcli closed this pull request 7 years ago.
Can one of the admins verify this patch?
Tested on tardis using the VM environment envs/example/ci-ceph-ubuntu.
After deployment, the rbd_ssd pool has a PG count of 256 (3 OSDs), as expected.
ok to test
Will there be any impact to an existing large Ceph cluster on reconverge?
As you know, we use bbg-ceph-utils to adjust the PG number instead of updating it in the ceph_pool module. After reconverging an existing cluster, the PG number will be increased by bbg-ceph-utils.
According to the Ceph docs, pgs_per_osd can be in the range of 100-300. It was 100 in earlier releases; we increased it to 200 to get a better PG distribution. Since we had already changed the value of pgs_per_osd in the ceph-defaults role, this commit makes the ceph_pool module use that parameter.
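For reference, here is a minimal sketch of the common PG sizing rule of thumb (pg_num ≈ OSDs × pgs_per_osd / replica count, rounded up to a power of two). The function name and defaults are illustrative assumptions, not the actual ceph_pool module code:

```python
import math

def suggested_pg_num(num_osds: int, pgs_per_osd: int = 200, pool_size: int = 3) -> int:
    """Target PG count: (OSDs * pgs_per_osd) / replicas, rounded up to a power of two."""
    raw = (num_osds * pgs_per_osd) / pool_size
    return 2 ** math.ceil(math.log2(raw))

# 3 OSDs with pgs_per_osd=200 and 3 replicas -> 200, rounded up to 256,
# which matches the rbd_ssd pool observed in the CI run above.
print(suggested_pg_num(3, 200, 3))  # 256
```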