ceph / ceph-ansible

Ansible playbooks to deploy Ceph, the distributed filesystem.
Apache License 2.0

TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] Failed #3611

Closed: imjustmatthew closed this issue 4 years ago

imjustmatthew commented 5 years ago

Bug Report

What happened: When running the site playbook, the following task fails on an existing cluster that was deployed with an older version of this playbook:

TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] *******************************************************************************************
fatal: [clotho.mtr.royhousehold.net]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'list object' has no attribute 'disabled_modules'\n\nThe error appears to have been in '/home/sevenofnine/ceph-ansible/roles/ceph-mgr/tasks/main.yml': line 28, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: set _disabled_ceph_mgr_modules fact\n  ^ here\n"}
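
For context, this is a minimal sketch of the kind of task that raises this error, assuming the role parses the JSON output of "ceph mgr module ls" into a registered variable and reads its disabled_modules key (the variable names here are illustrative, not necessarily what ceph-ansible ships):

# _ceph_mgr_modules is assumed to hold the parsed JSON from
# "ceph mgr module ls". On newer releases that output is a dict, so the
# attribute lookup works; on an older cluster it is a plain list, and the
# lookup fails with "'list object' has no attribute 'disabled_modules'".
- name: set _disabled_ceph_mgr_modules fact
  set_fact:
    _disabled_ceph_mgr_modules: "{{ _ceph_mgr_modules.disabled_modules }}"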

I believe this is because this older version of Ceph returns an unexpected format from "ceph mgr module ls":

$ sudo ceph mgr module ls
[
    "status"
]

...which is a flat list of module names, with no disabled_modules key for the task to read.
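
Newer releases return a JSON object with enabled_modules and disabled_modules keys, which appears to be the shape the task expects. A hedged sketch of a defensive version that tolerates both shapes (again, variable names are illustrative):

# Treat a plain-list response (older Ceph) as "no disabled modules";
# only read the disabled_modules key when the output is actually a dict.
- name: set _disabled_ceph_mgr_modules fact
  set_fact:
    _disabled_ceph_mgr_modules: "{{ _ceph_mgr_modules.disabled_modules | default([]) if _ceph_mgr_modules is mapping else [] }}"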

Share your group_vars files, inventory:

group_vars:


# Variables here are applicable to ceph-servers
ceph_private_interface: enp8s0
ceph_public_interface: enp3s0
ceph_private_netbase: "192.168.18"
ceph_public_netbase: "192.168.19"
ceph_public_mtu: 1500
ceph_private_mtu: 4096

fsid: 'xxxxxxxxxxxxxxxxxxxxxxxxxxx'
generate_fsid: false

#was: ceph_origin: upstream
ceph_origin: repository
ceph_repository: community
ceph_stable: true
ceph_stable_release: luminous

monitor_interface: "{{ ceph_public_interface }}"
journal_size: 5120 # in MB
public_network: "{{ ceph_public_netbase }}.0/24"
cluster_network: "{{ ceph_private_netbase }}.0/24"
#allow OSDs to discover and use new disks, then dm-crypt them with their journal
osd_auto_discovery: false #this is now broken
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdg
osd_scenario: collocated
osd_objectstore: bluestore
dmcrypt: true

cephfs: cephfs # name of the ceph filesystem
cephfs_data: cephfs_data # name of the data pool for a given filesystem
cephfs_metadata: cephfs_metadata # name of the metadata pool for a given filesystem

cephfs_pools:
  - { name: "{{ cephfs_data }}", pgs: "128" }
  - { name: "{{ cephfs_metadata }}", pgs: "128" }

ceph_conf_overrides:
  global:
    rbd default features: 1

Environment:

manojiruvuri commented 5 years ago

Is the issue resolved for you? What was the fix?

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.