
testbed ceph deployment has two `client.rgw.testbed-node-0.rgw0` sections in `/etc/ceph/ceph.conf` #831

Open yeoldegrove opened 8 months ago

yeoldegrove commented 8 months ago

When deploying the testbed (https://github.com/osism/testbed/commit/519454a99f23c645260b2eaa633286eedf46ab58), I end up with an invalid `/etc/ceph/ceph.conf`.

The `client.rgw.testbed-node-0.rgw0` section appears twice:

[global]
mon initial members = testbed-node-0,testbed-node-1,testbed-node-2
osd pool default crush rule = -1
fsid = 11111111-1111-1111-1111-111111111111
mon host = [v2:192.168.16.10:3300,v1:192.168.16.10:6789],[v2:192.168.16.11:3300,v1:192.168.16.11:6789],[v2:192.168.16.12:3300,v1:192.168.16.12:6789]
public network = 192.168.16.0/20
cluster network = 192.168.16.0/20
auth allow insecure global id reclaim = False
osd pool default size = 2
osd pool default min size = 0

[osd]
osd memory target = 5888907673

[client.rgw.testbed-node-0.rgw0]
host = testbed-node-0
keyring = /var/lib/ceph/radosgw/ceph-rgw.testbed-node-0.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-testbed-node-0.rgw0.log
rgw frontends = beast endpoint=192.168.16.10:8081
rgw thread pool size = 512

[mon]
mon allow pool delete = True

[client.rgw.testbed-node-0.rgw0]
rgw content length compat = true
rgw enable apis = swift, s3, admin
rgw keystone accepted admin roles = admin
rgw keystone accepted roles = member, admin
rgw keystone admin domain = default
rgw keystone admin password = foobar
rgw keystone admin project = service
rgw keystone admin tenant = service
rgw keystone admin user = ceph_rgw
rgw keystone api version = 3
rgw keystone implicit tenants = true
rgw keystone url = https://api-int.testbed.osism.xyz:5000
rgw keystone verify ssl = false
rgw s3 auth use keystone = true
rgw swift account in url = true
rgw swift versioning enabled = true
rgw verify ssl = false
berendt commented 8 months ago

Are you sure it is invalid? At least Ceph RGW works in the testbed and is usable. I do agree with you, though, that there should be only one section.

berendt commented 8 months ago

It also looks like this in our production environments, and the RGW service can be used there as well (via S3 and via the Swift integration with Keystone).

yeoldegrove commented 8 months ago

Running `ceph config assimilate-conf -i /etc/ceph/ceph.conf`, taken from the cephadm adoption how-to (https://docs.ceph.com/en/quincy/cephadm/adoption/), complained about it.
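For reference, the duplicate section can also be spotted without cephadm. Here is a minimal sketch using Python's configparser, which rejects duplicate sections in strict mode; the path handling is just an illustration and not part of any OSISM tooling:

```python
#!/usr/bin/env python3
"""Report duplicate sections in a ceph.conf-style INI file (illustrative sketch)."""
import configparser
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "/etc/ceph/ceph.conf"

# strict=True (the default) raises DuplicateSectionError on files like the one above.
parser = configparser.ConfigParser(strict=True)
try:
    with open(path) as handle:
        parser.read_file(handle)
    print(f"{path}: no duplicate sections found")
except configparser.DuplicateSectionError as err:
    print(f"{path}: section [{err.section}] appears more than once (line {err.lineno})")
```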

berendt commented 8 months ago

Ok. Then I'll see where I can fix this in ceph-ansible so that we have a clean ceph.conf.

brueggemann commented 8 months ago

If this also happens in production environments, I think we will need to take this problem into account for the migration.

berendt commented 8 months ago

If I find the problem before 20 March, we can fix it with the next stable release. Then the Ceph configuration will be up to date everywhere and we won't need a workaround.

berendt commented 7 months ago

@yeoldegrove @brueggemann For now, we would need to touch the existing `/etc/ceph/ceph.conf` once before the migration and merge the duplicate sections. I haven't had a chance to fix this yet.
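Until that fix lands, one way to merge the duplicate sections is to round-trip the file through a lenient INI parser. A minimal sketch, assuming Python's configparser semantics are acceptable here (the output path is made up; review the result before replacing the original file, since comments and ordering may not survive the round trip):

```python
#!/usr/bin/env python3
"""Merge duplicate sections in a ceph.conf (sketch, not OSISM tooling)."""
import configparser

SRC = "/etc/ceph/ceph.conf"
DST = "/etc/ceph/ceph.conf.merged"  # hypothetical output path; inspect before swapping it in

# strict=False merges duplicate sections instead of raising; later values win on conflicts.
parser = configparser.ConfigParser(strict=False)
parser.optionxform = str  # keep option names as written instead of lower-casing them

with open(SRC) as src:
    parser.read_file(src)

with open(DST, "w") as dst:
    parser.write(dst)

print(f"merged configuration written to {DST}")
```

Against the file above, this collapses the two `[client.rgw.testbed-node-0.rgw0]` blocks into a single section containing all of their options, after which `ceph config assimilate-conf` should no longer complain about duplicates.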