dmacvicar / terraform-provider-libvirt

Terraform provider to provision infrastructure with Linux's KVM using libvirt
Apache License 2.0

Make cloudinit disk SATA by default #1018

Open MatiasLGonzalez opened 1 year ago

MatiasLGonzalez commented 1 year ago

System Information

Linux distribution

Debian 11

Terraform version

Terraform v1.4.6
on linux_amd64
+ provider registry.terraform.io/dmacvicar/libvirt v0.7.1
+ provider registry.terraform.io/hashicorp/template v2.2.0

Provider and libvirt versions

./terraform-provider-libvirt_v0.7.1 0.7.1

Checklist

Description of Issue/Question

Setup

main.tf

provider "libvirt" {
  uri = "qemu+ssh://user@host/system"
}

resource "libvirt_pool" "debian" {
  name = "debian"
  type = "dir"
  path = var.libvirt_disk_path
}

resource "libvirt_volume" "debian-qcow2" {
  name   = "debian-qcow2"
  pool   = libvirt_pool.debian.name
  source = var.debian_11_img_url
  format = "qcow2"
}

data "template_file" "user_data" {
  template = file("${path.module}/config/cloud_init.yml")
}

data "template_file" "network_config" {
  template = file("${path.module}/config/network_config.yml")
}

resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  user_data      = data.template_file.user_data.rendered
  network_config = data.template_file.network_config.rendered
  pool           = libvirt_pool.debian.name

}

resource "libvirt_domain" "domain-debian" {
  name    = var.vm_hostname
  memory  = "1024"
  vcpu    = 2
  machine = "q35"

  cloudinit = libvirt_cloudinit_disk.commoninit.id

  network_interface {
    bridge = "virbr0"
  }

  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  disk {
    volume_id = libvirt_volume.debian-qcow2.id
    scsi      = "true"
  }

  graphics {
    type        = "vnc"
    listen_type = "address"
    autoport    = true
  }
}

Steps to Reproduce Issue

Error when running terraform apply

│ Error: error defining libvirt domain: unsupported configuration: IDE controllers are unsupported for this QEMU binary or machine type
│ 
│   with libvirt_domain.domain-debian,
│   on main.tf line 33, in resource "libvirt_domain" "domain-debian":
│   33: resource "libvirt_domain" "domain-debian" {
│ 

Additional information:

Do you have SELinux or Apparmor/Firewall enabled? Some special configuration? Have you tried to reproduce the issue without them enabled?

Discussion

This isn't really an issue, more of a discussion. I was wondering if there would be any problem with the provider creating the cloudinit disk as a SATA device by default, therefore allowing both Q35 and i440FX to be used easily. From what I can tell both machine types support SATA controllers, so I don't think much would break with this change.

From libvirt/domain.go

func newDiskForCloudInit(virConn *libvirt.Libvirt, volumeKey string) (libvirtxml.DomainDisk, error) {
    disk := libvirtxml.DomainDisk{
        // HACK mark the disk as belonging to the cloudinit
        // resource so we can ignore it
        Serial: "cloudinit",
        Device: "cdrom",
        Target: &libvirtxml.DomainDiskTarget{
            // Last device letter possible with a single IDE controller on i440FX
            Dev: "hdd",
            Bus: "ide",
        },
        Driver: &libvirtxml.DomainDiskDriver{
            Name: "qemu",
            Type: "raw",
        },
    }

    diskVolume, err := virConn.StorageVolLookupByKey(volumeKey)
    if err != nil {
        return disk, fmt.Errorf("can't retrieve volume %s: %w", volumeKey, err)
    }
    diskVolumeFile, err := virConn.StorageVolGetPath(diskVolume)
    if err != nil {
        return disk, fmt.Errorf("error retrieving volume file: %w", err)
    }

    disk.Source = &libvirtxml.DomainDiskSource{
        File: &libvirtxml.DomainDiskSourceFile{
            File: diskVolumeFile,
        },
    }

    return disk, nil
}

So it would be like:

func newDiskForCloudInit(virConn *libvirt.Libvirt, volumeKey string) (libvirtxml.DomainDisk, error) {
    disk := libvirtxml.DomainDisk{
        // HACK mark the disk as belonging to the cloudinit
        // resource so we can ignore it
        Serial: "cloudinit",
        Device: "cdrom",
        Target: &libvirtxml.DomainDiskTarget{
            // Last device letter possible with a single IDE controller on i440FX
            Dev: "hdd",
            Bus: "sata",
        },
        Driver: &libvirtxml.DomainDiskDriver{
            Name: "qemu",
            Type: "raw",
        },
    }

    diskVolume, err := virConn.StorageVolLookupByKey(volumeKey)
    if err != nil {
        return disk, fmt.Errorf("can't retrieve volume %s: %w", volumeKey, err)
    }
    diskVolumeFile, err := virConn.StorageVolGetPath(diskVolume)
    if err != nil {
        return disk, fmt.Errorf("error retrieving volume file: %w", err)
    }

    disk.Source = &libvirtxml.DomainDiskSource{
        File: &libvirtxml.DomainDiskSourceFile{
            File: diskVolumeFile,
        },
    }

    return disk, nil
}

artur-borys commented 10 months ago

Hello, I've done this for my own private purposes and it seems to work fine for the q35 machine type.

However, I think it would be best to parametrize the cloudinit disk bus type. At first I wanted to make it somehow auto-detect the machine type, but I think that would be a little bit too risky.

Perhaps it's better to add a new field to the libvirt_domain resource, called cloudinit_bus. That would be the most backwards-compatible change.
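For example, something along these lines (cloudinit_bus is only a hypothetical attribute name here, nothing the provider exposes today):

resource "libvirt_domain" "domain-debian" {
  # ... rest of the domain definition ...

  cloudinit     = libvirt_cloudinit_disk.commoninit.id
  # hypothetical new attribute, defaulting to "ide" for backwards compatibility
  cloudinit_bus = "sata"
}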

Alternatively, the cloudinit parameter could be changed to an object:

cloudinit {
  id  = ...
  bus = "sata|ide|whatever"
}

Generally, I think another parameter could be added to newDiskForCloudInit. I know there are two references to that function and they'll have to get the new field from somewhere. I just tried to implement this and if I understood the code right, this should work (will check in a few moments).

EDIT: I had to "touch grass" for some time, but I did now manage to actually try it out, and it does seem to work. I haven't added any acceptance tests yet, as that's a broader topic I still have to learn.

If anyone's interested, check out my fork https://github.com/artur-borys/terraform-provider-libvirt/commit/78e99493c33bf25037e1e79b76b334bcdb566a6f - maybe someone can use it already.

I hope I'll be able to write the acceptance test soon and create a PR

kubealex commented 6 months ago

It might be a good idea in general to make the bus configurable for any disk that is added; for instance, creating a Windows VM from an ISO without embedded virtio drivers requires a SATA main disk for the setup.
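Something along these lines, for example (a bus attribute on the disk block is purely illustrative and doesn't exist in the provider today, which only exposes the scsi flag; the volume name is made up):

disk {
  # hypothetical per-disk bus selection instead of the boolean scsi flag
  volume_id = libvirt_volume.windows-install-iso.id
  bus       = "sata"
}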

pfworks commented 5 months ago

Is there a way to do this without compiling the plugin on my own? I'm trying to use this for aarch64 and am running into this issue.

artur-borys commented 5 months ago

Is there a way to do this without compiling the plugin on my own? I'm trying to use this for aarch64 and am running into this issue.

I don't think so, you'll have to compile it

pfworks commented 5 months ago

Thanks, already compiled it, and now I just have to figure out how to set up the EFI/nvram stuff.
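I'm guessing it's roughly the firmware and nvram attributes on the domain, something like the sketch below (the AAVMF paths are distro specific and just my guess, so treat them as placeholders):

resource "libvirt_domain" "aarch64-vm" {
  # ... rest of the domain definition ...

  arch     = "aarch64"
  machine  = "virt"
  firmware = "/usr/share/AAVMF/AAVMF_CODE.fd"

  nvram {
    # per-domain copy of the UEFI variable store, seeded from the distro template
    file     = "/var/lib/libvirt/qemu/nvram/aarch64-vm_VARS.fd"
    template = "/usr/share/AAVMF/AAVMF_VARS.fd"
  }
}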

pfworks commented 5 months ago

And thanks for having this... I would have never found this issue on my own, or it would have taken me a very long time. 8-)

kubealex commented 5 months ago

I tried to find a workaround and ended up editing the generated domain XML using XSLT.
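Roughly like this, via the domain's xml.xslt hook (the stylesheet below is only a sketch of the idea, not the exact patch I used, and the target dev name may also need adjusting depending on the setup):

resource "libvirt_domain" "domain-debian" {
  # ... rest of the domain definition ...

  # patch the generated domain XML before the domain is defined: an identity
  # transform plus one override that flips the cloudinit cdrom's bus from ide to sata
  xml {
    xslt = <<-EOT
      <?xml version="1.0"?>
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="@*|node()">
          <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
        </xsl:template>
        <xsl:template match="disk[@device='cdrom']/target/@bus">
          <xsl:attribute name="bus">sata</xsl:attribute>
        </xsl:template>
      </xsl:stylesheet>
    EOT
  }
}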

DrPsychick commented 5 months ago

Using the XSLT patch works well. Using a vfat disk is also a working alternative: https://github.com/dmacvicar/terraform-provider-libvirt/pull/895#issuecomment-1911167872