dmacvicar / terraform-provider-libvirt

Terraform provider to provision infrastructure with Linux's KVM using libvirt
Apache License 2.0

Error creating libvirt domain: Failed to lock byte 100 #882

Open KarlNord opened 2 years ago

KarlNord commented 2 years ago

System Information

Linux distribution

Ubuntu server 20.04

Terraform version

terraform -v
Terraform v1.0.7
on linux_amd64
+ provider registry.terraform.io/dmacvicar/libvirt v0.6.11
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/poseidon/ct v0.7.1

Provider and libvirt versions

terraform-provider-libvirt -version
terraform-provider-libvirt: command not found

If that gives you "was not built correctly", get the Git commit hash from your local provider repository:

git describe --always --abbrev=40 --dirty

Checklist

Description of Issue/Question

I am trying to create a VM by following the documentation at:

https://kinvolk.io/docs/flatcar-container-linux/latest/installing/vms/libvirt/#terraform

```
terraform apply ...

libvirt_domain.machine["mynode"]: Creating...

│ Error: Error creating libvirt domain: internal error: qemu unexpectedly closed the monitor: 2021-09-17T18:10:41.352767Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-2-format","read-only":true,"driver":"qcow2","file":"libvirt-2-storage","backing":null}: Failed to lock byte 100
```

```
karl@k8s:~$ sudo ls -l /var/tmp/mycluster-pool/
total 938640
-rw-r--r-- 1 libvirt-qemu kvm  960954584 Sep 17 14:58 flatcar-base
-rw-r--r-- 1 root         root    196824 Sep 17 14:58 mycluster-mynode-f1ef4d6e3b0a4d6683a28360515bcbdc.qcow2
-rw-r--r-- 1 root         root      1018 Sep 17 14:58 mycluster-mynode-ignition
```
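Since "Failed to lock byte 100" looks like QEMU's image-locking error, one thing worth checking (I have not confirmed this is the cause) is whether another process still holds the base image open or locked:

```sh
# Sketch of a lock check, using the paths from the pool listing above.
# Show processes that still have the base image open:
sudo fuser -v /var/tmp/mycluster-pool/flatcar-base
# Show any file locks currently held on it:
sudo lslocks | grep flatcar-base
```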

The user karl is in the kvm and libvirt groups:

```
karl@k8s:~$ groups
karl adm cdrom sudo dip plugdev kvm lxd libvirt
```

```
karl@k8s:~$ virsh list --all
 Id   Name   State
```

```
karl@k8s:~$ virsh pool-list
 Name             State    Autostart
 mycluster-pool   active   yes
```
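For completeness, the volumes libvirt tracks in the pool can also be listed (output not captured here):

```sh
# list the volumes libvirt knows about in the pool
virsh vol-list mycluster-pool
```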

Setup

Please provide the full main.tf file for reproducing the issue (be sure to remove sensitive information).

cat libvirt-machines.tf

```hcl
terraform {
  required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.11"
    }
    ct = {
      source  = "poseidon/ct"
      version = "0.7.1"
    }
    template = {
      source  = "hashicorp/template"
      version = "~> 2.2.0"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

resource "libvirt_pool" "volumetmp" {
  name = "${var.cluster_name}-pool"
  type = "dir"
  path = "/var/tmp/${var.cluster_name}-pool"
}

resource "libvirt_volume" "base" {
  name   = "flatcar-base"
  source = var.base_image
  pool   = libvirt_pool.volumetmp.name
  format = "qcow2"
}

resource "libvirt_volume" "vm-disk" {
  for_each = toset(var.machines)

  # workaround: depend on libvirt_ignition.ignition[each.key], otherwise the
  # VM will use the old disk when the user-data changes
  name           = "${var.cluster_name}-${each.key}-${md5(libvirt_ignition.ignition[each.key].id)}.qcow2"
  base_volume_id = libvirt_volume.base.id
  pool           = libvirt_pool.volumetmp.name
  format         = "qcow2"
}

resource "libvirt_ignition" "ignition" {
  for_each = toset(var.machines)
  name     = "${var.cluster_name}-${each.key}-ignition"
  pool     = libvirt_pool.volumetmp.name
  content  = data.ct_config.vm-ignitions[each.key].rendered
}

resource "libvirt_domain" "machine" {
  for_each = toset(var.machines)
  name     = "${var.cluster_name}-${each.key}"
  vcpu     = var.virtual_cpus
  memory   = var.virtual_memory

  fw_cfg_name     = "opt/org.flatcar-linux/config"
  coreos_ignition = libvirt_ignition.ignition[each.key].id

  disk {
    volume_id = libvirt_volume.vm-disk[each.key].id
  }

  graphics {
    listen_type = "address"
  }

  # dynamic IP assignment on the bridge, NAT for Internet access
  network_interface {
    network_name   = "default"
    wait_for_lease = true
  }
}

data "ct_config" "vm-ignitions" {
  for_each = toset(var.machines)
  content  = data.template_file.vm-configs[each.key].rendered
}

data "template_file" "vm-configs" {
  for_each = toset(var.machines)
  template = file("${path.module}/machine-${each.key}.yaml.tmpl")

  vars = {
    ssh_keys = jsonencode(var.ssh_keys)
    name     = each.key
  }
}
```
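Note that `base_volume_id` makes each `vm-disk` volume a qcow2 overlay with `flatcar-base` as its backing file, so QEMU has to take locks on both images when the domain starts. The chain can be inspected with qemu-img (a sketch, using the volume name from the pool listing above):

```sh
# print the qcow2 backing chain of the generated disk to see which images
# QEMU has to open and lock when the domain starts
sudo qemu-img info --backing-chain \
  /var/tmp/mycluster-pool/mycluster-mynode-f1ef4d6e3b0a4d6683a28360515bcbdc.qcow2
```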

karl@k8s:~$ cat variables.tf

```hcl
variable "machines" {
  type        = list(string)
  description = "Machine names, corresponding to machine-NAME.yaml.tmpl files"
}

variable "cluster_name" {
  type        = string
  description = "Cluster name used as prefix for the machine names"
}

variable "ssh_keys" {
  type        = list(string)
  description = "SSH public keys for user 'core'"
}

variable "base_image" {
  type        = string
  description = "Path to unpacked Flatcar Container Linux image flatcar_production_qemu_image.img (probably after a qemu-img resize IMG +5G)"
}

variable "virtual_memory" {
  type        = number
  default     = 2048
  description = "Virtual RAM in MB"
}

variable "virtual_cpus" {
  type        = number
  default     = 1
  description = "Number of virtual CPUs"
}
```

karl@k8s:~$ cat machine-mynode.yaml.tmpl

```yaml
passwd:
  users:
```

karl@k8s:~$ cat terraform.tfvars

```hcl
base_image     = "file:///home/karl/Downloads/flatcar_production_qemu_image-libvirt-import.img"
cluster_name   = "mycluster"
machines       = ["mynode"]
virtual_memory = 768
ssh_keys       = ["ssh-rsa ... "]
```

Steps to Reproduce Issue

(Include debug logs if possible and relevant).

```
terraform init
terraform apply
```


Additional information:

Do you have SELinux or Apparmor/Firewall enabled? Some special configuration?

```
karl@k8s:~$ tail -n 5 /etc/apparmor.d/abstractions/libvirt-qemu
  # Site-specific additions and overrides. See local/README for details.
  include <local/abstractions/libvirt-qemu>
  # For ignition files
  /var/lib/libvirt/flatcar-linux/ r,
  /var/tmp/mycluster-pool/ r,
```
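If the AppArmor abstraction above turns out to be involved (not confirmed), the profiles have to be reloaded after editing it, roughly:

```sh
# reload AppArmor so the edited libvirt-qemu abstraction takes effect;
# the per-VM profiles are regenerated when the domain is started again
sudo systemctl reload apparmor
```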

Have you tried to reproduce the issue without them enabled? No