ceph / ceph-ansible

Ansible playbooks to deploy Ceph, the distributed filesystem.
Apache License 2.0

stable-3.2 'ansible.parsing.yaml.objects.AnsibleUnicode object' has no attribute 'stdout' #3856

Closed: cjd9023 closed this 5 years ago

cjd9023 commented 5 years ago

ceph-ansible version: stable-3.2
ansible version: 2.6.15

```
# ansible-playbook site.yml    # (only mons/osds)
TASK [ceph-config : generate ceph configuration file: ceph.conf] ******************************************************************************************************************
Sunday 14 April 2019  22:03:31 +0800 (0:00:00.312)       0:02:03.426 **********
fatal: [100.75.34.3]: FAILED! => {"msg": "'ansible.parsing.yaml.objects.AnsibleUnicode object' has no attribute 'stdout'"}
fatal: [100.75.34.5]: FAILED! => {"msg": "'ansible.parsing.yaml.objects.AnsibleUnicode object' has no attribute 'stdout'"}
ok: [100.75.34.8]

NO MORE HOSTS LEFT ****************************************************************************************************************************************************************

PLAY RECAP ************************************************************************************************************************************************************************
100.75.34.3                : ok=81   changed=4    unreachable=0    failed=1
100.75.34.5                : ok=75   changed=4    unreachable=0    failed=1
100.75.34.8                : ok=76   changed=4    unreachable=0    failed=0
```

role files: ceph-ansible-stable-3.2/roles/ceph-config/tasks/main.yml

Three hosts are installing luminous, all cloned from the same image (OpenStack). Two fail and one succeeds at generating ceph.conf. Even when executing with -vvvv, I still can't find the reason.

Asking for help. Thanks!
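
For anyone hitting the same message: this error means a Jinja2 expression dereferenced `.stdout` on a variable that turned out to be a plain string rather than a registered command result. A minimal sketch that reproduces the shape of the failure (hypothetical playbook and variable name, not taken from ceph-ansible):

```yaml
# repro.yml -- hypothetical, for illustration only
- hosts: localhost
  gather_facts: false
  vars:
    # a plain string where a registered result (a dict with a .stdout key)
    # was expected
    some_key: "AQDtH7NcAAAAABAAvxeiHD+Yg2agjKs7L61d+w=="
  tasks:
    - name: dereference .stdout on a string
      debug:
        msg: "{{ some_key.stdout }}"
      # fails with "'ansible.parsing.yaml.objects.AnsibleUnicode object' has
      # no attribute 'stdout'", the same message as in the play above
```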

```
# cat group_vars/mons.yml|grep -v "^$"|grep -v "^#"
---
dummy:
monitor_keyring: "AQDtH7NcAAAAABAAvxeiHD+Yg2agjKs7L61d+w=="
```
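
A side note on the variable above: if I remember the stable-3.2 sample group_vars correctly, `monitor_secret` defaults to `{{ monitor_keyring.stdout }}`, i.e. the role expects `monitor_keyring` to be the registered result of a key-generation command, not a literal key string. Under that assumption (please verify against your mons.yml.sample), supplying the key directly would look more like this sketch:

```yaml
---
dummy:
# sketch under the assumption above: set monitor_secret to the literal key
# instead of shadowing the registered monitor_keyring result
monitor_secret: "AQDtH7NcAAAAABAAvxeiHD+Yg2agjKs7L61d+w=="
```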

```
# cat group_vars/osds.yml|grep -v "^$"|grep -v "^#"
---
dummy:
osd_scenario: lvm
lvm_volumes:
  - data: vgbcache0-lvbcache0
    data_vg: vgbcache0
    db: vgwal_db_h-lvblockdb_vdd
    db_vg: vgwal_db_h
    wal: vgwal_db_h-lvwal_vdd
    wal_vg: vgwal_db_h
  - data: vgbcache1-lvbcache1
    data_vg: vgbcache1
    db: vgwal_db_h-lvblockdb_vde
    db_vg: vgwal_db_h
    wal: vgwal_db_h-lvwal_vde
    wal_vg: vgwal_db_h
  - data: vgbcache2-lvbcache2
    data_vg: vgbcache2
    db: vgwal_db_i-lvblockdb_vdf
    db_vg: vgwal_db_i
    wal: vgwal_db_i-lvwal_vdf
    wal_vg: vgwal_db_i
  - data: vgbcache3-lvbcache3
    data_vg: vgbcache3
    db: vgwal_db_i-lvblockdb_vdg
    db_vg: vgwal_db_i
    wal: vgwal_db_i-lvwal_vdg
    wal_vg: vgwal_db_i
```

```
# cat group_vars/all.yml|grep -v "^$"|grep -v "^#"
---
dummy:
ceph_origin: repository
ceph_repository: community
ceph_mirror: http://mirrors.163.com/ceph
ceph_stable_key: http://mirrors.163.com/ceph/keys/release.asc
ceph_stable_release: luminous
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}"
ceph_conf_key_directory: /etc/ceph
cephx: true
rbd_cache: "false"
monitor_interface: eth0
public_network: 100.75.34.0/0
cluster_network: 100.75.34.0/0
osd_objectstore: bluestore
ceph_conf_overrides:
  global:
    auth_cluster_required: cephx
    auth_service_required: cephx
    auth_client_required: cephx
    osd_pool_default_size: 3
    osd_pool_default_min_size: 1
    osd_pool_default_pg_num: 512
    osd_pool_default_pgp_num: 512
  mon:
    mon_clock_drift_allowed: 0.5
  osd:
    osd_op_threads: 16
    osd_disk_threads: 4
    osd_max_backfills: 1
    osd_recovery_op_priority: 1
  client:
    rbd_cache: false
    rbd_cache_writethrough_until_flush: false
    rbd_default_format: 2
os_tuning_params:
  - { name: kernel.pid_max, value: 4194303 }
  - { name: fs.file-max, value: 26234859 }
  - { name: vm.swappiness, value: 0 }
```
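
One way to narrow down which variable the template trips over is to print the type of each suspect on every host before the failing task; `monitor_keyring` is a natural first candidate here, since it is the only override in these group_vars that plausibly shadows a registered result. A sketch (hypothetical helper playbook, run with the same inventory and group_vars):

```yaml
# type_check.yml -- hypothetical helper
- hosts: all
  gather_facts: false
  tasks:
    - name: show how monitor_keyring is typed on each host
      debug:
        # a registered result reports as 'dict'; a group_vars string reports
        # as 'AnsibleUnicode', which is what the traceback complains about
        msg: "{{ inventory_hostname }}: {{ monitor_keyring | type_debug }}"
      when: monitor_keyring is defined
```
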
guits commented 5 years ago

@cjd9023 please share the full playbook log, thanks!

cjd9023 commented 5 years ago

nohup_error.log

Thanks!

Kallio commented 5 years ago

Missing a space after a `:` somewhere?
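
To illustrate the suggestion: in YAML, dropping the space after a colon turns what should be a one-key mapping into a plain string, and a plain string is exactly the AnsibleUnicode object the traceback complains about. A sketch (not taken from the reporter's files):

```yaml
with_space:
  - data: vgbcache0-lvbcache0   # parses as the mapping {"data": "vgbcache0-lvbcache0"}
without_space:
  - data:vgbcache0-lvbcache0    # parses as the single string "data:vgbcache0-lvbcache0"
```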

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.