Closed: @1NoOne1 closed this issue 5 years ago.
cc @akasurde @dagwieers @dav1x @jctanner @nerzhul
It seems to be access related. I wonder whether your user has the required permissions for what you want to do. Do you see any access-related events in vCenter?
@dagwieers I don't see any access-related events in vCenter; in fact, the error is coming from vCenter itself. I only see the same error in vCenter too: `Unable to access the virtual machine configuration: Unable to access file [xio-mgmt-lun02] Test_Templatecentos7/Test_Templatecentos7.vmtx`
I used the same credentials in Ansible as in PowerCLI. Admin is the only user in my vCenter and is the root user.
How come this works when I use VMware PowerCLI with the same parameters, but not from Ansible or pyVmomi?
Any update on this? @dagwieers @Akasurde
Is the shared storage accessible to both clusters? I've run into a similar issue. The module attempts to clone the VM to the first sorted host in the cluster. If that host doesn't have access to the datastore holding the template, you'll get a similar error.
@dav1x I am not sure how to validate what you said from the Ansible/pyVmomi perspective. The storage I have is XtremIO LUNs. If I had accessibility issues, I should get the same error when I manually deploy a VM from the template, correct? (But that is not the case here; I could deploy a VM into another cluster from vCenter and PowerCLI.)
Not necessarily. Does each cluster have access to all of the LUNs in question?
Specifically, from your error message: does the first host sorted in the vCenter client in cluster az2 have access to xio-mgmt-lun02?
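To illustrate the failure mode being described: this is a minimal sketch (not the module's actual code) of how picking the first sorted host of the target cluster can fail when that host does not have the template's datastore mounted. The host and cluster names below are hypothetical, not from this thread's environment.

```python
# Sketch of the host-selection behavior described above (illustrative only,
# not vmware_guest's real implementation).

def pick_deploy_host(cluster_hosts):
    """Return the first host by sorted name, mimicking the module's selection."""
    return sorted(cluster_hosts)[0]

def can_access_template(host, datastore_hosts):
    """True if the chosen host has the template's datastore mounted."""
    return host in datastore_hosts

# Hypothetical target cluster az2, and the hosts that actually have the
# template LUN (xio-mgmt-lun02) mounted -- here, only hosts in az1:
az2_hosts = ["esx-az2-03", "esx-az2-01", "esx-az2-02"]
lun02_mounts = ["esx-az1-01", "esx-az1-02"]

host = pick_deploy_host(az2_hosts)              # -> "esx-az2-01"
print(can_access_template(host, lun02_mounts))  # -> False: the clone would fail
```

If the chosen host cannot see the datastore, the clone fails with exactly the "Unable to access file [...].vmtx" error quoted above.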
Thanks for your reply @dav1x . I appreciate your time on this.
That's the setup I have; it looks like the cluster does not have direct access to the template's storage (which is mgmt-lun). We created LUNs for each cluster and added them to the respective clusters. We haven't added all the LUNs to all the clusters (it doesn't make sense to add all of them to every cluster; instead one would use an NFS/NAS share).
I am a bit confused here. When you say the host should have access to mgmt-lun, i.e., the host is trying to read the template for VM creation, wouldn't that be handled by vCenter itself when cloning/copying VMs/templates from one LUN to another?
Also, I see this problem only when using the vSphere Python SDK/Ansible modules. How is PowerCLI handling it gracefully? Any explanation is appreciated.
By default, the datastore vmware_guest uses is located via the location of the template. You should be able to specify a destination datastore under disk.
Like this:

```yaml
disk:
  - size_gb: 60
    datastore: "{{ vcenter_datastore }}"
    type: thin
```
This should circumvent the datastore check and prompt vmware_guest to use the datastore in question. To answer your question: yes, I think the check for the datastore should be initiated from the host or cluster specified, not from the VM object (template) in question. For your immediate use, the above should get around the issue.
I tried and still have the same issue.
```text
failed: [localhost] (item={u'vm_name': u'testvm19', u'datacenter': u'aus6', u'vc_name': u'vcenter1', u'vm_template': u'Test_Templatecentos7', u'vm_nmask_3': u'255.255.255.0', u'vm_nmask_2': u'255.255.255.0', u'vm_nmask_1': u'255.255.255.0', u'vc_pass': u'!', u'vc_user': u'administrator@', u'datastore': u'xio-az1', u'port_group_1': u'd300-protected1', u'port_group_2': u'd301-protected2', u'port_group_3': u'd302-protected3', u'cluster': u'az1', u'dns_1': u'10.231.0.101', u'dns_2': u'10.231.0.103', u'folder': u'/az1', u'vm_gw_1': u'10.7.240.1', u'vm_ip_3': u'10.7.242.65', u'vm_ip_2': u'10.7.241.65', u'vm_ip_1': u'10.7.240.65'}) => {
    "changed": true,
    "failed": true,
    "invocation": {
        "module_args": {
            "annotation": null,
            "cluster": "az1",
            "customization": {},
            "customvalues": [],
            "datacenter": "aus6",
            "disk": [
                {
                    "datastore": "xio-az1",
                    "size_gb": 160,
                    "type": "thin"
                }
            ],
            "esxi_hostname": null,
            "folder": "/vm",
            "force": false,
            "guest_id": null,
            "hardware": {
                "memory_mb": 2048,
                "num_cpus": 1,
                "scsi": "paravirtual"
            },
            "hostname": "vcenter1",
            "is_template": false,
            "name": "testvm19",
            "name_match": "first",
            "networks": [
                {
                    "device_type": "vmxnet3",
                    "dns_servers": [
                        "10.231.0.101",
                        "10.231.0.103"
                    ],
                    "gateway": "10.7.240.1",
                    "ip": "10.7.240.65",
                    "name": "d300-protected1",
                    "netmask": "255.255.255.0"
                },
                {
                    "device_type": "vmxnet3",
                    "ip": "10.7.241.65",
                    "name": "d301-protected2",
                    "netmask": "255.255.255.0"
                },
                {
                    "device_type": "vmxnet3",
                    "ip": "10.7.242.65",
                    "name": "d302-protected3",
                    "netmask": "255.255.255.0"
                }
            ],
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "resource_pool": null,
            "state": "poweredon",
            "template": "Test_Templatecentos7",
            "template_src": "Test_Templatecentos7",
            "username": "administrator@",
            "uuid": null,
            "validate_certs": false,
            "wait_for_ip_address": true
        }
    },
    "item": {
        "cluster": "az1",
        "datacenter": "aus6",
        "datastore": "xio-az1",
        "dns_1": "10.231.0.101",
        "dns_2": "10.231.0.103",
        "folder": "/az1",
        "port_group_1": "d300-protected1",
        "port_group_2": "d301-protected2",
        "port_group_3": "d302-protected3",
        "vc_name": "vcenter1",
        "vc_pass": "!",
        "vc_user": "administrator@",
        "vm_gw_1": "10.7.240.1",
        "vm_ip_1": "10.7.240.65",
        "vm_ip_2": "10.7.241.65",
        "vm_ip_3": "10.7.242.65",
        "vm_name": "testvm19",
        "vm_nmask_1": "255.255.255.0",
        "vm_nmask_2": "255.255.255.0",
        "vm_nmask_3": "255.255.255.0",
        "vm_template": "Test_Templatecentos7"
    },
    "msg": "Unable to access the virtual machine configuration: Unable to access file [xio-mgmt-lun02] Test_Templatecentos7/Test_Templatecentos7.vmtx"
}
        to retry, use: --limit @/home/ansible/playbooks/DeployVM.retry

PLAY RECAP **************************************************************************************************************************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1
```
Let me try to recreate this. I'm successfully able to clone a template from the same cluster to a specified datastore. I'm thinking that may be the root cause of your issue.
Yeah, please.
I did mention that in my issue description. The issue occurs only when I try to deploy across clusters. If I have the template in the same cluster, there are no problems. However, this is really not a practical scenario. Assume you just created a cluster and added hosts to it: I would be unnecessarily creating a copy of an existing template just to get it into the newly added cluster. Now assume I have N templates.
I have a 2-host cluster right now. I pulled a host from my cluster and created a new cluster, then unmounted the datastore where I keep my templates. I can clone across clusters provided I specify disk options.
```yaml
- name: create the vm
  vmware_guest:
    hostname: 10.x.x.25
    username: administrator@vsphere.local
    password: password
    validate_certs: false
    name: myhost
    #state: shutdownguest
    state: present
    #state: absent
    force: True
    disk:
      - size_gb: 60
        datastore: ose3-vmware
        type: thin
      - size_gb: 60
        datastore: ose3-vmware
        type: thin
      - size_gb: 60
        datastore: ose3-vmware
        type: thin
    datacenter: Boston
    cluster: test
    template: ocp-server-template-2.0.2
    annotation: "issue 28498"
```
```text
changed: [localhost] => {
    "changed": true,
    "failed": false,
    "instance": {
        "annotation": "issue 28498",
        "current_snapshot": null,
        "customvalues": {},
        "guest_tools_status": "guestToolsRunning",
        "guest_tools_version": "10277",
        "hw_eth0": {
            "addresstype": "assigned",
            "ipaddresses": [
                "10.19.114.238",
                "2620:52:0:1372:250:56ff:fea5:9a90",
                "fe80::250:56ff:fea5:9a90"
            ],
            "label": "Network adapter 1",
            "macaddress": "00:50:56:a5:9a:90",
            "macaddress_dash": "00-50-56-a5-9a-90",
            "summary": "VM Network"
        },
        "hw_guest_full_name": "Red Hat Enterprise Linux 7 (64-bit)",
        "hw_guest_id": "rhel7_64Guest",
        "hw_interfaces": [
            "eth0"
        ],
        "hw_memtotal_mb": 4096,
        "hw_name": "myhost",
        "hw_power_status": "poweredOn",
        "hw_processor_count": 2,
        "hw_product_uuid": "42255b43-4ec6-4b21-71a7-5dd70972dd35",
        "ipv4": "2620:52:0:1372:250:56ff:fea5:9a90",
        "ipv6": "fe80::250:56ff:fea5:9a90",
        "module_hw": true,
        "snapshots": []
    },
    "invocation": {
        "module_args": {
            "annotation": "issue 28498",
            "cluster": "test",
            "customization": {},
            "customvalues": [],
            "datacenter": "Boston",
            "disk": [
                {
                    "datastore": "ose3-vmware",
                    "size_gb": 60,
                    "type": "thin"
                },
                {
                    "datastore": "ose3-vmware",
                    "size_gb": 60,
                    "type": "thin"
                },
                {
                    "datastore": "ose3-vmware",
                    "size_gb": 60,
                    "type": "thin"
                }
            ],
            "esxi_hostname": null,
            "folder": "/vm",
            "force": true,
            "guest_id": null,
            "hardware": {},
            "hostname": "10.19.114.25",
            "is_template": false,
            "name": "myhost",
            "name_match": "first",
            "networks": [
                {
                    "device_type": "vmxnet3",
                    "gateway": "10.19.115.254",
                    "ip": "10.19.114.238",
                    "name": "VM Network",
                    "netmask": "255.255.254.0"
                }
            ],
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "resource_pool": null,
            "state": "present",
            "template": "ocp-server-template-2.0.2",
            "template_src": "ocp-server-template-2.0.2",
            "username": "administrator@vsphere.local",
            "uuid": null,
            "validate_certs": false,
            "wait_for_ip_address": true
        }
    }
}
```
Can you post your playbook with the added disk list?
I have the playbook written as below :
If you are free and available, I can show you a demo over WebEx (I mean, if you are really free!).
```yaml
---
- name: VM from template
  hosts: localhost
  gather_facts: true
  connection: local
  become: yes
  vars_files:
    - ../group_vars/vms
  tasks:
    - name: Set Fact Values
      set_fact:
        vm_driver_1: "{{ vm_driver_1 | default('vmxnet3') }}"
        vm_driver_2: "{{ vm_driver_2 | default('vmxnet3') }}"
        vm_driver_3: "{{ vm_driver_3 | default('vmxnet3') }}"
    - name: create the VM from Template
      vmware_guest:
        hostname: "{{ item.vc_name }}"
        username: "{{ item.vc_user }}"
        password: "{{ item.vc_pass }}"
        validate_certs: no
        datacenter: "{{ item.datacenter }}"
        cluster: "{{ item.cluster }}"
        #folder: "{{ item.folder }}"
        name: "{{ item.vm_name }}"
        state: poweredon
        disk:
          - size_gb: 160
            type: thin
            datastore: "{{ item.datastore }}"
        hardware:
          memory_mb: 2048
          num_cpus: 1
          scsi: paravirtual
        networks:
          - name: "{{ item.port_group_1 }}"
            ip: "{{ item.vm_ip_1 }}"
            gateway: "{{ item.vm_gw_1 }}"
            netmask: "{{ item.vm_nmask_1 }}"
            device_type: "{{ vm_driver_1 }}"
            dns_servers:
              - "{{ item.dns_1 }}"
              - "{{ item.dns_2 }}"
          - name: "{{ item.port_group_2 }}"
            ip: "{{ item.vm_ip_2 }}"
            netmask: "{{ item.vm_nmask_2 }}"
            device_type: "{{ vm_driver_2 }}"
          - name: "{{ item.port_group_3 }}"
            ip: "{{ item.vm_ip_3 }}"
            netmask: "{{ item.vm_nmask_3 }}"
            device_type: "{{ vm_driver_3 }}"
        template: "{{ item.vm_template }}"
        wait_for_ip_address: yes
      register: deploy
      with_items:
        - "{{ vms }}"
```
Sure. I've got an hour before my next session. Send me an invite via email - davis.phillips@gmail.com
Thanks!
The above is the workflow I am trying to execute. I really don't have any templates library as in a standard vCenter template library; the template is simply placed on one of the LUNs.
Just to recap the webex, the root cause was the usage of a datastore cluster instead of the datastore name. I'll create a new issue for a feature request for datastore cluster support. Glad I could help out!
Thanks a lot @dav1x for your time. Could you please also add the folder functionality?
Hey @1NoOne1 I tested folder functionality as well.
`folder: "/vm/foo"`
That cloned across clusters into the foo folder in the root of the datacenter.
@dav1x I tested the folder functionality and somehow it is not working for me. It always deploys VMs under the root of the datacenter. I will have to look more into it.
I am also seeing this with 2.4, even though I am specifying the datastore name.
I'm seeing this as well with 2.4. In my case I'm specifying a cluster rather than a hostname. It picked a completely different host in a completely different cluster to deploy to, hence the error. Statically assigning the hostname rather than the cluster works fine. A new bug in the host selection process?
cc @pdellaert
cc @warthog9
@1NoOne1 Could you please retry with the latest Ansible version and let us know whether it works? Since PR https://github.com/ansible/ansible/pull/35812 is merged, I think this will be resolved. Thanks.
needs_info
cc @ckotte
Hello @dav1x, I have the same problem and no resolution yet. Can you assist, please?
Within vCenter I can clone across clusters, but using Ansible it fails. Error as below: `"msg": "Failed to create a virtual machine : Unable to access the virtual machine configuration: Unable to access file [datastore_name] template_name/template_name.vmtx"`
@1NoOne1 This issue is waiting for your response. Please respond or the issue will be closed.
@Akasurde This looks to be working now in Ansible 2.7.4. :thumbsup:
Tested the following scenarios [the documentation needed an update]:
1. A folder is required when deploying a new VM; without it the deployment fails:

```text
.....
.....
        "vm_template": "centos7_k862"
    },
    "msg": "Folder is required parameter while deploying new virtual machine"
}
        to retry, use: --limit @/home/ansible/playbooks/DeployVM1.retry

PLAY RECAP *******************************************************************************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1
```
2. A datacenter is required as well; otherwise it fails looking for ha-datacenter:

```text
},
"msg": "No datacenter named ha-datacenter was found"
}
        to retry, use: --limit @/home/ansible/playbooks/DeployVM1.retry

PLAY RECAP *** localhost : ok=2 changed=0 unreachable=0 failed=1
```
3. The target cluster is needed as well; if you don't specify it, the deployment fails:

```text
},
"msg": "Failed to create a virtual machine : Unable to access the virtual machine configuration: Unable to access file [xio-az1-lun03]"
}
        to retry, use: --limit @/home/ansible/playbooks/DeployVM1.retry

PLAY RECAP *** localhost : ok=2 changed=0 unreachable=0 failed=1
```
4. :thumbsup: When folder, cluster (the target resource cluster), datastore (a datastore cluster), and datacenter are all specified, the VM is SUCCESSFULLY deployed across clusters.

**[TO REMEMBER: vCenter has a limit of 8 concurrent cloning operations across clusters. This is yet to be tested. :exclamation:]**
"vm_name": "testvm19",
"vm_nmask_1": "255.255.255.0",
"vm_nmask_2": "255.255.255.0",
"vm_nmask_3": "255.255.255.0",
"vm_template": "centos7_k862"
}
} META: ran handlers META: ran handlers
PLAY RECAP *** localhost : ok=3 changed=1 unreachable=0 failed=0
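The working combination from scenario 4 can be sketched as a minimal task. This is a hedged reconstruction, not a verified playbook from this thread: the folder path and the credential variables (`vcenter_hostname`, `vcenter_username`, `vcenter_password`) are illustrative placeholders.

```yaml
# Minimal sketch of the cross-cluster clone that worked (scenario 4 above).
# Folder path and credential variables are illustrative assumptions.
- name: Clone a template across clusters
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: aus6          # datacenter is required
    cluster: az1              # target resource cluster is required
    folder: /aus6/vm/az1      # destination folder is required (assumed path format)
    name: testvm19
    state: poweredon
    template: centos7_k862
    disk:
      - size_gb: 160
        type: thin
        datastore: xio-az1    # destination datastore (or datastore cluster)
```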
cc @lparkes
cc @Tomorrow9
Just a comment, which might be helpful. Ansible sometimes throws this error, but it is a red herring. In my case, the issue was using the wrong datastore name for the VM disks, which has nothing to do with the playbook being unable to find the template file. This should be fixed, as it led me on a goose chase for days. It turned out my script that creates the host file was not setting the datastore correctly. First, check the datastore name you are using for your VM disks.
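One way to run that sanity check from Ansible itself: list the datastore names vCenter actually knows about and compare them against what the play uses. This is a sketch; the credential variables are placeholders, and the module was named `vmware_datastore_facts` in the Ansible versions discussed here (later renamed `vmware_datastore_info`).

```yaml
# Sketch: verify datastore names before blaming the template path.
# Credential variables are illustrative assumptions.
- name: Gather datastore facts
  vmware_datastore_facts:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter_name }}"
  register: ds_facts

- name: Show available datastore names
  debug:
    msg: "{{ ds_facts.datastores | map(attribute='name') | list }}"
```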
cc @goneri
cc @pgbidkar
Closing as per https://github.com/ansible/ansible/issues/28649#issuecomment-446354072. Please feel free to open a new issue if problem persists. Thanks.
@pgbidkar Thanks for providing the information.
SUMMARY
STEPS TO REPRODUCE
Running the vmware_guest module to create a VM from a template gives the following error: `"msg": "Unable to access the virtual machine configuration: Unable to access file [xio-mgmt-lun02] Test_Templatecentos7/Test_Templatecentos7.vmtx"`.
However, I am getting the above-mentioned error.
I tried deploying the VM from the template in vCenter: IT WORKED FINE.
```text
failed: [localhost] (item={u'vm_name': u'testvm20', u'datacenter': u'aus6', u'vc_name': u'vcenter.local', u'vm_template': u'Test_Templatecentos7', u'vm_nmask_3': u'255.255.255.0', u'vm_nmask_2': u'255.255.255.0', u'vm_nmask_1': u'255.255.255.0', u'vc_pass': u'Melody1!', u'vc_user': u'administrator', u'port_group_1': u'd300-protected1', u'port_group_2': u'd301-protected1', u'port_group_3': u'd302-protected1', u'cluster': u'az2', u'dns_1': u'10.231.0.101', u'dns_2': u'10.231.0.103', u'folder': u'/az2', u'vm_gw_1': u'10.7.240.1', u'vm_ip_3': u'10.7.242.66', u'vm_ip_2': u'10.7.241.66', u'vm_ip_1': u'10.7.240.66'}) => {
    "changed": true,
    "failed": true,
    "invocation": {
        "module_args": {
            "annotation": null,
            "cluster": "az2",
            "customization": {},
            "customvalues": [],
            "datacenter": "aus2",
            "disk": [],
            "esxi_hostname": null,
            "folder": "/vm",
            "force": false,
            "guest_id": null,
            "hardware": {
                "memory_mb": 2048,
                "num_cpus": 1,
                "scsi": "paravirtual"
            },
            "hostname": "vcenter.local",
            "is_template": false,
            "name": "testvm20",
            "name_match": "first",
            "networks": [
                {
                    "device_type": "vmxnet3",
                    "dns_servers": [
                        "10.231.0.101",
                        "10.231.0.103"
                    ],
                    "gateway": "10.7.240.1",
                    "ip": "10.7.240.66",
                    "name": "d300-protected1",
                    "netmask": "255.255.255.0"
                },
                {
                    "device_type": "vmxnet3",
                    "ip": "10.7.241.66",
                    "name": "d301-protected2",
                    "netmask": "255.255.255.0"
                },
                {
                    "device_type": "vmxnet3",
                    "ip": "10.7.242.66",
                    "name": "d302-protected3",
                    "netmask": "255.255.255.0"
                }
            ],
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "resource_pool": null,
            "state": "poweredon",
            "template": "Test_Templatecentos7",
            "template_src": "Test_Templatecentos7",
            "username": "administrator",
            "uuid": null,
            "validate_certs": false,
            "wait_for_ip_address": true
        }
    }
}
        to retry, use: --limit @/home/ansible/playbooks/DeployVM.retry

PLAY RECAP ** localhost : ok=2 changed=0 unreachable=0 failed=1
```