ceph / ceph-ansible

Ansible playbooks to deploy Ceph, the distributed filesystem.
Apache License 2.0

Unhandled exception from module 'dashboard' while running on mgr.hosts #3449

Closed ixiaoyi93 closed 5 years ago

ixiaoyi93 commented 5 years ago

Bug Report

What happened:

2018-12-17 11:39:43.120050 [ERR]  Unhandled exception from module 'dashboard' while running on mgr.k8store01: IOError("Port 9000 not bound on '::'",)
2018-12-17 11:23:04.636503 [ERR]  Unhandled exception from module 'dashboard' while running on mgr.k8store03: IOError("Port 9000 not bound on '::'",)
2018-12-17 11:22:14.467886 [INF]  Manager daemon k8store03 is now available
2018-12-17 11:22:14.313213 [INF]  Active manager daemon k8store03 restarted 
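
For reference, the IOError says the dashboard module could not bind port 9000 on the IPv6 wildcard address '::'. A quick way to check the current bind settings and whether the port is already taken on the active mgr host (a rough sketch; the config-key names are the luminous dashboard defaults, and port 9000 is assumed to have been set earlier):

$ ceph config-key get mgr/dashboard/server_addr    # bind address the module will use, if set
$ ceph config-key get mgr/dashboard/server_port    # bind port the module will use, if set
$ ss -tlnp | grep 9000                             # anything else already listening on 9000?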

What you expected to happen:

How to reproduce it (minimal and precise):

Share your group_vars files, inventory

$ egrep -v "^#|^$" group_vars/all.yml
---
dummy:
ceph_origin: repository
ceph_repository: community
ceph_mirror: https://mirrors.aliyun.com/ceph
ceph_stable_key: https://mirrors.aliyun.com/ceph/keys/release.asc
ceph_stable_release: luminous
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}"
fsid: 17ffc828-5d8c-4937-a5bb-f6adb2384d20
generate_fsid: true
ceph_conf_key_directory: /etc/ceph
cephx: true
monitor_interface: bond0
public_network: 172.17.0.0/16
cluster_network: 172.17.0.0/16
ceph_conf_overrides:
  global:
    rbd_default_features: 7
    auth cluster required: cephx
    auth service required: cephx
    auth client required: cephx
    osd journal size: 2048
    osd pool default size: 3
    osd pool default min size: 1
    mon_pg_warn_max_per_osd: 1024
    osd pool default pg num: 1024
    osd pool default pgp num: 1024
    max open files: 131072
    osd_deep_scrub_randomize_ratio: 0.01
  mon:
    mon_allow_pool_delete: true
  client:
    rbd_cache: true
    rbd_cache_size: 335544320
    rbd_cache_max_dirty: 134217728
    rbd_cache_max_dirty_age: 10
  mgr:
    mgr modules: dashboard
  osd:
    osd mkfs type: xfs
    ms_bind_port_max: 7100
    osd_client_message_size_cap: 2147483648
    osd_crush_update_on_start: true
    osd_deep_scrub_stride: 131072
    osd_disk_threads: 4
    osd_map_cache_bl_size: 128
    osd_max_object_name_len: 256
    osd_max_object_namespace_len: 64
    osd_max_write_size: 1024
    osd_op_threads: 8
    osd_recovery_op_priority: 1
    osd_recovery_max_active: 1
    osd_recovery_max_single_start: 1
    osd_recovery_max_chunk: 1048576
    osd_recovery_threads: 1
    osd_max_backfills: 4
    osd_scrub_begin_hour: 23
    osd_scrub_end_hour: 7

$ egrep -v "^#|^$" group_vars/osds.yml
---
dummy:
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdg
osd_scenario: collocated
osd_objectstore: bluestore

$ egrep -v "^#|^$" site.yml
---
- hosts:
  - mons
  - osds
  - mdss
  - clients
  - mgrs

No changes have been made to the other group_vars files.
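
In case it helps with reproducing or working around this, the luminous dashboard takes its bind address and port from mgr config-keys, so one possible workaround is to pin them explicitly and restart the module (a sketch only; 0.0.0.0 and 7000 below are illustrative values, not what this cluster is required to use):

$ ceph config-key set mgr/dashboard/server_addr 0.0.0.0   # bind the IPv4 wildcard instead of '::'
$ ceph config-key set mgr/dashboard/server_port 7000      # illustrative port, adjust as needed
$ ceph mgr module disable dashboard
$ ceph mgr module enable dashboard                        # restart the module so it rebinds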

Environment:

leseb commented 5 years ago

This doesn't look like a ceph-ansible bug; it's more of a Ceph one. Please open an issue on tracker.ceph.com. Thanks.