stackhpc / ansible-role-libvirt-vm

This role configures and creates VMs on a KVM hypervisor.

image vs backing_image -> how can I use image that's already in the pool, but not as backing? #50

velis74 opened this issue 4 years ago. Status: Open.

velis74 commented 4 years ago

My VM host already has a template disk file in its storage pool. I would like to use that file as a full-copy template, not as a COW backing image.

However, specifying the 'image' parameter for a storage volume expects the image file to be on the controller machine. OTOH, specifying 'backing_image' will use that image as a COW backing image, which I'm also not too fond of.
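
To illustrate, here is a rough sketch of the two options as I understand them (field names follow the volume spec; values are placeholders):

volumes:
  # 'image': the source is copied in full, but it has to live on the controller
  # machine (or be a URL).
  - name: guest-a
    pool: default
    capacity: 2T
    format: qcow2
    image: /path/on/controller/template.qcow2
  # 'backing_image': uses a volume already in the pool, but only as a COW
  # backing file, not as a full copy.
  - name: guest-b
    pool: default
    capacity: 2T
    format: qcow2
    backing_image: template.qcow2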

I'm looking at volumes.yml and I don't see how I could do what I want. Is it possible?

velis74 commented 4 years ago

An additional comment: I tried to use the image as backing_image, but that too fails for me: the newly created image is owned by root and libvirt can't access it.

How can I get around that?

Edit: ownership by root is not the issue: libvirtd runs as root. I found that the machine only ran if I first copied the disk file (a backing image also worked) and then created the machine from virt-manager, probably in the same manner I used when creating the template VM while installing the base image. So the actual problem seems to be somewhere in the VM creation parameters.

I'm stopping the investigation here: this will probably clash with your design, and I don't expect a PR fixing my issue to be accepted. I will just use my existing "manual" playbook.

Edit2: there were significant differences between the VM definition files generated for a manually created VM and for a VM created by this role.

As stated above: I will not investigate further unless there is interest.

markgoddard commented 4 years ago

I don't think I fully understand your requirements, but one thing that might help is that the image can be a URL.
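
For example, something along these lines should work (untested sketch; names and values are placeholders):

volumes:
  - name: guest-a
    pool: default
    capacity: 2T
    format: qcow2
    image: "https://example.com/images/template.qcow2"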

velis74 commented 4 years ago

My requirement: I already have a file in the storage pool on the host machine, and I want to use that file as a direct-copy template for the VM being created.

A URL requires the image file to be downloaded every time. While I usually have fast internet on my hosts, it still takes minutes with multi-GB images.

markgoddard commented 4 years ago

OK, it sounds like we just need to decouple the source image file from the location of the file for the volume. I don't think that should be too difficult; it just needs to stay backwards compatible.
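
For example (purely hypothetical; the parameter name is not decided), a volume could reference a source file that already exists on the hypervisor and have it copied in full into the new volume:

volumes:
  - name: guest-a
    pool: default
    capacity: 2T
    format: qcow2
    # hypothetical parameter: full copy of a file already present on the VM host
    source_image: /var/lib/libvirt/images/template.qcow2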

velis74 commented 4 years ago

I have since finished client provisioning. I could post the entire list of commands that do what I want, but they are incompatible with your design: I use libguestfs-tools to set the hostname and the like, and the virtinst package for the actual VM creation.
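
The gist of it as Ansible tasks, so you can see the shape of it (a trimmed sketch, not the actual playbook; paths, variables and the os-variant are placeholders):

- name: Copy the template volume inside the storage pool on the VM host
  command: cp /var/lib/libvirt/images/template.qcow2 /var/lib/libvirt/images/{{ guest_name }}.qcow2

- name: Set the hostname inside the guest image (libguestfs-tools)
  command: virt-customize -a /var/lib/libvirt/images/{{ guest_name }}.qcow2 --hostname {{ guest_name }}

- name: Define and start the guest (virtinst)
  command: >
    virt-install --import --noautoconsole
    --name {{ guest_name }} --memory {{ ram_size }} --vcpus {{ cpu_count }}
    --disk path=/var/lib/libvirt/images/{{ guest_name }}.qcow2,format=qcow2
    --network network=default --os-variant generic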

But, OTOH, I have the following additional features working:

I'm not sure whether any of the above is even in your design.

markgoddard commented 4 years ago

Thanks for sharing. In the main use of this role (https://docs.openstack.org/kayobe/latest/) we use a configdrive (e.g. https://github.com/jriguera/ansible-role-configdrive) to assign the hostname, network config, etc.
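
For reference, the idea is to build a small ISO labelled config-2 that carries the instance metadata and attach it to the VM as a cdrom; cloud-init inside the guest then applies it on first boot. A minimal hand-rolled sketch (not what ansible-role-configdrive actually does; contents abbreviated):

- name: Create the config drive content tree
  file:
    path: /tmp/configdrive/openstack/latest
    state: directory

- name: Write minimal metadata for the guest
  copy:
    dest: /tmp/configdrive/openstack/latest/meta_data.json
    content: '{"uuid": "guest-a", "hostname": "guest-a"}'

- name: Build the config drive ISO (cloud-init looks for the config-2 volume label)
  command: genisoimage -output /tmp/guest-a-configdrive.iso -volid config-2 -joliet -rock /tmp/configdrive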

velis74 commented 4 years ago

For the sake of completeness, here's my final solution. Feel free to use any or none of it.

roles/provision_vm_guests/tasks/main.yaml:

main.zip

the playbook section:

- name: Enumerate guests for each VM host
  hosts: vm_hosts
  tasks:
    # Build the guest definitions by filtering hostvars on each guest's vm_host variable.
    - set_fact:
        vm_guests: >
          {{ groups['all']|map('extract', hostvars)|selectattr('vm_host', 'defined')|list|
            json_query("[?vm_host=='" + inventory_hostname + "'].{
              state: 'present', name: hostname, memory_mb: ram_size, vcpus: cpu_count,
              volumes: [{ pool: 'default', name: hostname, device: 'disk', format: 'qcow2',
                          capacity: '2T', source_image: 'ansible-stub-btrfs.qcow2' }],
              interfaces: [{ network: 'default', mac: join('', ['52:54:00:19:74:', host_sequence]),
                             ip: join('', ['192.168.123.', host_sequence]) }],
              ansible_port: ansible_port
            }")
          }}
      delegate_to: localhost
      run_once: true
#    - debug:
#        var: libvirt_vm_script_env

- name: Create guests for each VM host
  hosts: vm_hosts
  roles:
    - role: provision_vm_guests
      vars:
        guests: '{{ vm_guests }}'

AFAIC, this issue can be closed.