ansible-collections / community.vmware

Ansible Collection for VMware
GNU General Public License v3.0

add attach/detach functionality to the vmware_host_datastore module #1372

Open christian-naenny opened 2 years ago

christian-naenny commented 2 years ago
SUMMARY

The vmware_host_datastore module can be used to mount and unmount datastores from ESXi hosts. But I would also need the functionality to detach the datastore device after unmounting the VMFS filesystem. The same is true for the mount. I need to be able to attach the datastore device which is available but detached from a host before mounting it.
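A hypothetical interface for the requested behaviour could look like the snippet below. The `attached`/`detached` states do not exist in the module today; they are illustrative only, sketching the feature being requested:

```yaml
# Hypothetical extension of community.vmware.vmware_host_datastore.
# The states "attached" and "detached" are proposed, not implemented.
- name: Attach the datastore device before mounting it
  community.vmware.vmware_host_datastore:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    esxi_hostname: "{{ esxi_host }}"
    datastore_name: my_datastore
    state: attached        # proposed: attach the SCSI device only

- name: Detach the datastore device after unmounting it
  community.vmware.vmware_host_datastore:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    esxi_hostname: "{{ esxi_host }}"
    datastore_name: my_datastore
    state: detached        # proposed: detach after unmount
```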

ISSUE TYPE

Feature Idea

COMPONENT NAME

community.vmware.vmware_host_datastore

ADDITIONAL INFORMATION

Basically, what I would like to do is what I have been doing with PowerShell cmdlets. Here's a snippet of my PowerShell code:

foreach ($ESXiHostName in ($ESXiHostNames | Sort-Object)) {
    $vmHost     = Get-VMHost -Name $ESXiHostName
    $hostView   = Get-View $vmHost
    $storageSys = Get-View $hostView.ConfigManager.StorageSystem
    $devices    = $storageSys.StorageDeviceInfo.ScsiLun

    foreach ($datastoreName in ($dataStoreNames | Sort-Object)) {
        foreach ($device in ($devices | Sort-Object)) {
            if ($device.DisplayName -eq $datastoreName) {
                $LunUUID = $device.Uuid
                $state   = $device.operationalState[0]

                if ($state -eq "off") {
                    debug "scheduling datastore $datastoreName for attachment to host $ESXiHostName..."
                    # add necessary information to workElements hash
                $key = "${LunUUID}:${ESXiHostName}"
                    $workElements.Add($key,$storageSys)
                } elseif ($state -eq "ok") {
                    debug "The datastore $datastoreName is already attached to the host $ESXiHostName, skipping it..."
                } else {
                    error "The datastore $datastoreName does not have the right operational state to be attached to host $ESXiHostName..."
                }
            } #end if device name eq datastore name
        } # end foreach device
    } # end foreach datastore
} # end foreach ESXiHost

if ($workElements.Count -gt 0) {
    # now execute the actual work in parallel using the workElements hash
    info "now attaching all datastore devices to all hosts with a maximum of parallel actions of ${limit}"
    $workElements.GetEnumerator() | ForEach-Object -Parallel { ($_.Value).AttachScsiLun($($_.Key.split(":"))[0]) } -ThrottleLimit $limit
}
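Under the hood, the module could drive the same vSphere API calls that PowerCLI uses above: `AttachScsiLun`/`DetachScsiLun` on the host's `HostStorageSystem` managed object. A minimal Python sketch, assuming a pyVmomi storage-system object has already been obtained; the helper function names are mine, not the module's:

```python
# Sketch of the attach step against a pyVmomi HostStorageSystem object
# (the SDK the community.vmware modules are built on). Assumes the
# caller already resolved host.configManager.storageSystem.

def find_lun_uuid(storage_system, display_name):
    """Return the UUID of the SCSI LUN whose display name matches, or None."""
    for lun in storage_system.storageDeviceInfo.scsiLun:
        if lun.displayName == display_name:
            return lun.uuid
    return None

def ensure_attached(storage_system, display_name):
    """Attach the LUN if its first operational state is 'off' (detached).

    Returns True if a change was made, False if already attached.
    """
    for lun in storage_system.storageDeviceInfo.scsiLun:
        if lun.displayName != display_name:
            continue
        state = lun.operationalState[0]
        if state == "off":
            storage_system.AttachScsiLun(lunUuid=lun.uuid)  # vSphere API call
            return True
        if state == "ok":
            return False
        raise RuntimeError(f"unexpected operational state {state!r}")
    raise LookupError(f"no LUN named {display_name!r}")
```

A `detach` counterpart would mirror this with `DetachScsiLun` after the filesystem is unmounted, which is exactly the ordering the PowerShell above enforces.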

This is the playbook snippet I would expect to work: it first attaches the datastore device and then mounts the VMFS filesystem. There are a lot of LDAP lookups, as much of the information about our systems is stored in our LDAP instance!

---
- name: "mount datastores of a given diskpool on all ESXi hosts of the cluster"
  gather_facts: no
  hosts: localhost
  vars_files:
    - ../lib/ldap-data.yml
    - ../lib/vcenter-login.yml
  vars:
    diskpool: "sagaz_te03"
    login: &login
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"

  tasks:
    - name: "search for VMs on given diskpool {{ diskpool }}"
      community.general.ldap_search:
        dn: "ou=Server,ou=Infrastruktur,ou=Informatik,DC=SNB,DC=CH"
        bind_dn: ""
        filter: "(&(objectclass=snbvmwarevm)(snbvmdiskpool=*{{ diskpool }}*))"
        scope: "children"
        server_uri: "{{ ldap_server }}"
        start_tls: no
        attrs:
          - snbhostname
          - snbcurrenthost
          - snbvcenterhost
      register: vm_attrs
      check_mode: no         # this task is run, even if check_mode is selected!

    - name: display the results
      ansible.builtin.debug:
        msg: ["snbhostname:    {{ vm_attrs.results[0].snbhostname }}",
              "snbcurrenthost: {{ vm_attrs.results[0].snbcurrenthost }}",
              "snbvcenterhost: {{ vm_attrs.results[0].snbvcenterhost }}"
        ]
      check_mode: no         # this task is run, even if check_mode is selected!

    - name: get the datacentername from the currenthost of the first VM
      community.general.ldap_search:
        dn: "{{ vm_attrs.results[0].snbcurrenthost }}"
        bind_dn: ""
        filter: "(cn=*)"
        scope: "children"
        server_uri: "{{ ldap_server }}"
        start_tls: no
        attrs:
          - snbvidatacentername
      register: datacenter
      check_mode: no         # this task is run, even if check_mode is selected!

    - name: "get the domain name of the first ESXi host of the list"
      community.general.ldap_search:
        dn: "ou=Cluster,ou=Infrastruktur,ou=Informatik,DC=SNB,DC=CH"
        bind_dn: ""
        filter: "(snbmemberhosts={{ vm_attrs.results[0].snbcurrenthost }})"
        scope: "children"
        server_uri: "{{ ldap_server }}"
        start_tls: no
        attrs:
          - dn
          - description
          - snbmemberhosts
      register: cluster_attrs
      check_mode: no         # this task is run, even if check_mode is selected!

    - name: "get the hostnames of the cluster members"
      community.general.ldap_search:
        dn: "{{ item }}"
        bind_dn: ""
        filter: "(objectclass=snbesxserver)"
        scope: "children"
        server_uri: "{{ ldap_server }}"
        start_tls: no
        attrs:
          - snbhostname
          - cn
      register: esx_hostnames
      loop: "{{ cluster_attrs.results[0].snbmemberhosts }}"
      check_mode: no         # this task is run, even if check_mode is selected!

    - name: "show ESXi hostnames for diskpool {{ diskpool }}"
      ansible.builtin.debug:
        msg: "ESXi hostname: {{ item[0] }}"
      loop: "{{ esx_hostnames | community.general.json_query('results[*].results[*].cn') }}"
      check_mode: no         # this task is run, even if check_mode is selected!

    - name: "get the members of the given diskpool {{ diskpool }}"
      community.general.ldap_search:
        dn: "ou=Diskpools,ou=Speichersysteme,ou=Infrastruktur,ou=Informatik,DC=SNB,DC=CH"
        bind_dn: ""
        filter: "(cn={{ diskpool }})"
        scope: "children"
        server_uri: "{{ ldap_server }}"
        start_tls: no
        attrs:
          - snbpoolmember
      register: datastores
      check_mode: no         # this task is run, even if check_mode is selected!

    - name: "show datastores for diskpool {{ diskpool }}"
      ansible.builtin.debug:
        msg: "datastore: {{ item }}"
      loop: "{{ datastores.results[0].snbpoolmember }}"
      check_mode: no         # this task is run, even if check_mode is selected!

    - name: Gather info about vmhbas of all ESXi Host in the given Cluster
      community.vmware.vmware_host_disk_info:
        hostname: "{{ vm_attrs.results[0].snbvcenterhost }}"
        <<: *login
        esxi_hostname: "{{ item[0] | join }}"
      register: cluster_host_vmhbas
      loop: "{{ esx_hostnames | community.general.json_query('results[*].results[*].cn') }}"
      check_mode: no         # this task is run, even if check_mode is selected!

    - name: "test the lookup of the canonical_name"
      ansible.builtin.debug:
        msg: "{{ cluster_host_vmhbas.results[0] | community.general.json_query('hosts_disk_info.*[*] | [0][?display_name==`' + item + '`].canonical_name | [0]') }}"
      loop: "{{ datastores.results[0].snbpoolmember }}"
      check_mode: no         # this task is run, even if check_mode is selected!
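For readers untangling the `json_query` expression in that task: it searches the per-host disk lists returned by `vmware_host_disk_info` and picks the `canonical_name` of the first disk whose `display_name` matches the datastore. A plain-Python equivalent (data structure inferred from the query; the function name is mine):

```python
def canonical_name_for(hosts_disk_info, display_name):
    """Mirror of the JMESPath query: walk the per-host disk lists and
    return the canonical name of the first matching display name."""
    for per_host_disks in hosts_disk_info.values():
        for disk in per_host_disks:
            if disk.get("display_name") == display_name:
                return disk.get("canonical_name")
    return None
```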

    - name: mount/unmount datastores
      community.vmware.vmware_host_datastore:
        datastore_name: "{{ item[1] }}"
        datastore_type: vmfs
        esxi_hostname: "{{ item[0] | join }}"
        vmfs_device_name: "{{ cluster_host_vmhbas.results[0] | community.general.json_query('hosts_disk_info.*[*] | [0][?display_name==`' + item[1] + '`].canonical_name | [0]') }}"
        vmfs_version: 6
        state: present
        hostname: "{{ vm_attrs.results[0].snbvcenterhost }}"
        <<: *login
      loop: "{{ esx_hostnames | community.general.json_query('results[*].results[*].cn') | product(datastores.results[0].snbpoolmember) | list }}"
ansibullbot commented 2 years ago

Files identified in the description: None

If these files are inaccurate, please update the component name section of the description or use the !component bot command.


mkarel commented 1 year ago

We need this same feature so we can remove RDM LUNs as well. Our current playbook removes the RDM from the VM, but we are having to rip the storage away from the host as part of our nightly environment refresh. This causes APD events, and we would love to be able to properly detach LUNs.

christian-naenny commented 1 year ago

Any update on this issue? Is there any chance this will be selected for development anytime soon?

spretorius85 commented 1 week ago

We also need this feature; it would be great if it could be selected for development.