ShyamsundarR opened this issue 7 years ago
Hi @ShyamsundarR, `backend_reset` works for me. These are the variables used in `group_vars/all`:
```yaml
lvs:
  - GLUSTER_lv1
  - GLUSTER_lv2
master: 10.70.42.122
mountpoints:
  - /gluster/brick1
  - /gluster/brick2
pvs:
  - /dev/vdb
  - /dev/vdc
unmount:
  - 'yes'
vgs:
  - GLUSTER_vg1
  - GLUSTER_vg2
```
And this is the playbook:
```yaml
---
- hosts: gluster_servers
  remote_user: root
  gather_facts: no
  tasks:
    - name: Cleans up backend
      backend_reset: pvs="{{ pvs }}"
                     vgs="{{ vgs }}"
                     lvs="{{ lvs }}"
                     unmount="{{ unmount }}"
                     mountpoints="{{ mountpoints }}"
```
@ShyamsundarR this worked too:
```yaml
---
- hosts: gluster_servers
  remote_user: root
  gather_facts: no
  tasks:
    - name: Cleans up backend
      backend_reset: pvs="{{ item }}"
                     unmount="yes"
      with_items:
        - /dev/vdb
        - /dev/vdc
```
The above play does not remove the said PV when the PV is present (whether additional VGs, LVs, or mounts are present makes no difference).
Changing the form of the `with_items` list also does not help, nor does passing `pvs` in other variants.
The issue seems to stem from the `literal_eval` call pointed out in issue #247, which ends up evaluating `self.pvs` to None; as a result nothing is done on the system, and the play nevertheless returns success.
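For illustration, here is a minimal sketch of why `literal_eval` turns a bare device path into None. The wrapper function `parse_pvs` and its try/except structure are my assumptions about the module's parsing, not gdeploy's actual code:

```python
import ast

def parse_pvs(value):
    # Hypothetical reproduction of the suspected parsing behavior:
    # literal_eval succeeds on a stringified Python list, but a bare
    # device path is not a valid Python literal, so the except clause
    # yields None and the module silently does nothing.
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        return None

print(parse_pvs("['/dev/vdb', '/dev/vdc']"))  # ['/dev/vdb', '/dev/vdc']
print(parse_pvs("/dev/vdb"))                  # None
```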
A diff that relaxes this parsing helps the situation and works for the above play, as well as for plays that pass in a list of devices.
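The following is only a sketch of the kind of change such a diff could make, again using the hypothetical parse helper from above rather than gdeploy's actual code:

```python
import ast

def parse_pvs_fixed(value):
    # Hypothetical fix: fall back to the raw string when literal_eval
    # cannot interpret it, so a single device path such as "/dev/vdb"
    # (or a comma-separated list of paths) is still acted upon.
    try:
        parsed = ast.literal_eval(value)
    except (ValueError, SyntaxError):
        parsed = [p.strip() for p in value.split(',')]
    if isinstance(parsed, str):
        parsed = [parsed]
    return parsed

print(parse_pvs_fixed("/dev/vdb"))                  # ['/dev/vdb']
print(parse_pvs_fixed("['/dev/vdb', '/dev/vdc']"))  # ['/dev/vdb', '/dev/vdc']
```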
The above is not submitted as a PR, as I assume gdeploy itself may not function with these changes; since this is invoked from a play and works there, it is left here for consideration when making `backend_reset` a proper Ansible module.