jeevadotnet closed this issue 2 years ago.
@jeevadotnet can you try to rerun the playbook with the following ceph_conf_overrides
instead?
ceph_conf_overrides:
  global:
    osd_pool_default_size: 4
    osd_pool_default_min_size: 3
    osd_pool_default_pg_num: 32
    osd_pool_default_pgp_num: 32
  client.glance:
    rbd default data pool: images_data
  client.nova:
    rbd default data pool: vms_data
  client.cinder:
    rbd default data pool: volumes_data
  client.cinder-backup:
    rbd default data pool: backups_data
  mds:
    mds_cache_memory_limit: 95899345920
    mds_session_blacklist_on_timeout: false
  osd:
    osd_scrub_priority: 4
    osd_memory_target: 6442450944
    osd_max_scrubs: 1
    osd_scrub_load_threshold: 10
    osd_scrub_thread_suicide_timeout: 300
    osd_scrub_max_interval: 2419200
    osd_scrub_min_interval: 1209600
    osd_deep_scrub_interval: 3024000
    osd_deep_scrub_randomize_ratio: 0.01
    osd_scrub_interval_randomize_ratio: 0.5
    bluestore_warn_on_bluefs_spillover: false
    osd_deep_scrub_stride: 524288
  mon:
    osd_max_scrubs: 1
    osd_scrub_load_threshold: 10
    auth_allow_insecure_global_id_reclaim: True
  mgr:
    osd_scrub_max_interval: 2419200
    osd_scrub_min_interval: 1209600
    osd_deep_scrub_interval: 3024000
    osd_deep_scrub_randomize_ratio: 0.01
    osd_scrub_interval_randomize_ratio: 0.5
    mon_max_pg_per_osd: 400
    mon_pg_warn_max_object_skew: 30
    osd_deep_scrub_stride: 524288
  client.rgw.A-08-02-storage.rgw0:
    "rgw keystone api version": "3"
    "rgw keystone url": "http://10.102.73.10:35357"
    "rgw keystone accepted admin roles": "admin, ResellerAdmin"
    "rgw keystone accepted roles": "Member, _member_, admin, ResellerAdmin"
    "rgw keystone implicit tenants": "true"
    "rgw keystone admin user": "ceph_rgw"
    "rgw keystone admin password": "PASSWORD"
    "rgw keystone admin project": "service"
    "rgw keystone admin domain": "default"
    "rgw keystone verify ssl": "false"
    "rgw content length compat": "true"
    "rgw enable apis": "s3, swift, swift_auth, admin"
    "rgw s3 auth use keystone": "true"
    "rgw enforce swift acls": "true"
    "rgw swift account in url": "true"
    "rgw swift versioning enabled": "true"
    "rgw verify ssl": "false"
    "rgw enable usage log": "true"           # logging
    "rgw usage log tick interval": "30"      # logging
    "rgw usage log flush threshold": "1024"  # logging
@guits shouldn't it be client.rgw.{{ hostvars[inventory_hostname]['ansible_facts']['hostname'] }}.rgw0
instead? I have three RGW clients in the inventory, so if I only define client.rgw.A-08-02-storage.rgw0,
that section will be applied to the other two servers as well.
ceph_conf_overrides:
  global:
    osd_pool_default_size: 4
    ...
  client.glance:
    rbd default data pool: images_data
  client.nova:
    rbd default data pool: vms_data
  client.cinder:
    rbd default data pool: volumes_data
  client.cinder-backup:
    rbd default data pool: backups_data
  mds:
    mds_cache_memory_limit: 95899345920
    ....
  osd:
    osd_scrub_priority: 4
    ....
  mon:
    osd_max_scrubs: 1
    ....
  mgr:
    osd_scrub_max_interval: 2419200
    ....
  client.rgw.A-08-02-storage.rgw0:
    "rgw keystone api version": "3"
    ....
  client.rgw.A-08-08-storage.rgw0:
    "rgw keystone api version": "3"
    ....
  client.rgw.A-09-02-storage.rgw0:
    "rgw keystone api version": "3"
    ....
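(For illustration: if the three RGW sections are identical apart from the host name, a YAML anchor can avoid repeating the keystone block in the group_vars. A minimal sketch, assuming the options really are the same on all three hosts; the anchor name rgw_keystone is hypothetical.)

ceph_conf_overrides:
  client.rgw.A-08-02-storage.rgw0: &rgw_keystone   # hypothetical anchor name
    "rgw keystone api version": "3"
    "rgw keystone url": "http://10.102.73.10:35357"
    # ... remaining keystone/swift/usage-log options from above ...
  client.rgw.A-08-08-storage.rgw0: *rgw_keystone
  client.rgw.A-09-02-storage.rgw0: *rgw_keystone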
@guits I've done as instructed, but now I have a new duplicate issue.
If I don't define any rgw parameters, a duplicate is created under my first client.rgw.A-08-02-storage.rgw0 section:
rgw frontends = beast endpoint=10.102.51.13:8080
rgw frontends = beast endpoint=10.102.51.11:8080
When I define it in the group_vars as below, it creates:
rgw frontends = beast endpoint=10.102.51.13:7480
rgw frontends = beast endpoint=10.102.51.11:7480
I've tried a variety of combinations, setting the parameter or leaving it unset; every one resulted in a duplicate rgw frontends entry.
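(For illustration: ceph-ansible normally renders the rgw frontends line itself from its own role variables, so also defining it via ceph_conf_overrides can yield the duplicate. A minimal group_vars sketch, assuming the stable-6.0 defaults radosgw_frontend_type, radosgw_frontend_port and radosgw_interface exist in your checkout; verify the names against roles/ceph-defaults/defaults/main.yml and drop any rgw frontends key from ceph_conf_overrides.)

# Let ceph-ansible build "rgw frontends" once from its own variables
# (variable names assumed from roles/ceph-defaults/defaults/main.yml;
# confirm them in your branch before use).
radosgw_frontend_type: beast
radosgw_frontend_port: 8080
radosgw_interface: bond0   # hypothetical interface name for the RGW endpoint IP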
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
I'm having the same issue: I try to set an override (to enable Swift) and end up with duplicate sections, one containing just the overrides and one based on https://github.com/ceph/ceph-ansible/blob/b40e4bfe60cb14a8eac225086f60d5b170636b6d/roles/ceph-config/templates/ceph.conf.j2#L89-L119
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
A further update: it seems that having the same section defined multiple times is OK; I suspect any keys redefined in later sections override those in earlier sections.
I suspect any keys redefined in later sections override the earlier sections.
yes, that's correct.
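(For illustration, a minimal ceph.conf fragment of the behaviour confirmed above: when a section appears twice, the value parsed last is the one that takes effect.)

[client.rgw.A-08-02-storage.rgw0]
rgw frontends = beast endpoint=10.102.51.13:8080

[client.rgw.A-08-02-storage.rgw0]
rgw frontends = beast endpoint=10.102.51.13:7480   # this later value wins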
I'm sorry I don't have a lot of time for this at the moment, but I'll try to take a look at this as soon as possible...
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
What happened: Running
adopt-cephadm.yml
against my testbed, which is currently on Pacific,
with ceph-ansible stable-6.0.
I then get the following ceph-ansible task issue:
From inspecting
/etc/ceph/ceph.conf
the section [client.rgw.A-08-02-storage.rgw0] appears twice, as per the error. How does one work around this duplication, which ceph-ansible created in the first place?
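(For illustration only, a minimal sketch of one possible workaround: merge the duplicate sections in /etc/ceph/ceph.conf before running adopt-cephadm.yml. The script is hypothetical and not part of ceph-ansible; Python's configparser with strict=False accepts duplicate sections and merges them, with later values overriding earlier ones.)

#!/usr/bin/env python3
# Hypothetical helper: collapse duplicate sections in ceph.conf
# (not part of ceph-ansible; review the result before adoption).
import configparser
import shutil

path = "/etc/ceph/ceph.conf"
shutil.copy2(path, path + ".bak")  # keep a backup of the original

# strict=False tolerates duplicate sections/options and merges them,
# with later values overriding earlier ones; interpolation=None avoids
# choking on '%' characters in values.
cfg = configparser.ConfigParser(strict=False, interpolation=None)
cfg.read(path)

with open(path, "w") as f:
    cfg.write(f)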
What you expected to happen: To run the playbook as intended.
How to reproduce it (minimal and precise): use the ceph_conf_overrides from the inventory group_vars shown above.
Environment:
- OS: Ubuntu 20.04.4 LTS
- Kernel (uname -a): Linux B-03-11-cephctl 5.4.0-122-generic
- Docker version (docker version): 20.10.12
- Ansible version (ansible-playbook --version): 2.10.17
- ceph-ansible version (git head or tag or stable branch): stable-6.0
- Ceph version (ceph -v): 16.2.10