terraform-provider-libvirt -version
terraform-provider-libvirt: command not found
If that gives you "was not built correctly", get the Git commit hash from your local provider repository:
git describe --always --abbrev=40 --dirty
Checklist
[ ] Is your issue/contribution related to enabling a setting/option exposed by libvirt that the plugin does not yet support, or does it require changing/extending the provider's Terraform schema?
[ ] Make sure you explain why this option is important to you and why it would be valuable to everyone. Describe your use case in detail and provide examples where possible.
[ ] If it is a very special case, consider using the XSLT support in the provider to tweak the definition instead of opening an issue.
[ ] Maintainers do not have expertise in every libvirt setting, so please describe the feature and how it is used, and link to the appropriate documentation.
[ ] Is it a bug or something that does not work as expected? Please make sure you fill in the version information below:
System Information
Linux distribution
Ubuntu server 20.04
Terraform version
Provider and libvirt versions
Description of Issue/Question
I am trying to implement a VM by following the documentation at:
https://kinvolk.io/docs/flatcar-container-linux/latest/installing/vms/libvirt/#terraform
terraform apply ...
libvirt_domain.machine["mynode"]: Creating...
│ Error: Error creating libvirt domain: internal error: qemu unexpectedly closed the monitor: 2021-09-17T18:10:41.352767Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-2-format","read-only":true,"driver":"qcow2","file":"libvirt-2-storage","backing":null}: Failed to lock byte 100
karl@k8s:~$ sudo ls -l /var/tmp/mycluster-pool/
total 938640
-rw-r--r-- 1 libvirt-qemu kvm  960954584 Sep 17 14:58 flatcar-base
-rw-r--r-- 1 root         root    196824 Sep 17 14:58 mycluster-mynode-f1ef4d6e3b0a4d6683a28360515bcbdc.qcow2
-rw-r--r-- 1 root         root      1018 Sep 17 14:58 mycluster-mynode-ignition
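For reference, the hex suffix in the `.qcow2` file name above is `md5(libvirt_ignition.ignition[each.key].id)` from the vm-disk `name` expression. Terraform's `md5()` is a plain lowercase-hex MD5 of the string, so the mapping can be checked in shell (the ignition id value below is a made-up placeholder, not the real volume id):

```shell
# Terraform's md5() is lowercase hex MD5 of the input string, same as md5sum.
# The ignition volume id here is a placeholder, not the real one.
ignition_id="/var/tmp/mycluster-pool/mycluster-mynode-ignition"
hash=$(printf '%s' "$ignition_id" | md5sum | cut -d' ' -f1)
echo "mycluster-mynode-${hash}.qcow2"
```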
karl is in groups kvm and libvirt
karl@k8s:~$ groups
karl adm cdrom sudo dip plugdev kvm lxd libvirt
karl@k8s:~$ virsh list --all
 Id   Name   State
karl@k8s:~$ virsh pool-list
 Name             State    Autostart
 mycluster-pool   active   yes
Setup
(Please provide the full main.tf file for reproducing the issue. Be sure to remove sensitive information.)
cat libvirt-machines.tf
terraform {
  required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.11"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

resource "libvirt_pool" "volumetmp" {
  name = "${var.cluster_name}-pool"
  type = "dir"
  path = "/var/tmp/${var.cluster_name}-pool"
}

resource "libvirt_volume" "base" {
  name   = "flatcar-base"
  source = var.base_image
  pool   = libvirt_pool.volumetmp.name
  format = "qcow2"
}
resource "libvirt_volume" "vm-disk" {
  for_each = toset(var.machines)
  # workaround: depend on libvirt_ignition.ignition[each.key], otherwise the
  # VM will use the old disk when the user-data changes
  name           = "${var.cluster_name}-${each.key}-${md5(libvirt_ignition.ignition[each.key].id)}.qcow2"
  base_volume_id = libvirt_volume.base.id
  pool           = libvirt_pool.volumetmp.name
  format         = "qcow2"
}
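As an aside on the workaround above: on Terraform >= 1.2 (newer than the 0.13 required here) the same "recreate the disk when the ignition data changes" intent can be expressed with `lifecycle.replace_triggered_by` instead of hashing the ignition id into the volume name. A sketch, not tested against this provider version:

```hcl
# Sketch (requires Terraform >= 1.2): replace the disk whenever the ignition
# volume is replaced, without encoding md5(ignition id) in the volume name.
resource "libvirt_volume" "vm-disk" {
  for_each       = toset(var.machines)
  name           = "${var.cluster_name}-${each.key}.qcow2"
  base_volume_id = libvirt_volume.base.id
  pool           = libvirt_pool.volumetmp.name
  format         = "qcow2"

  lifecycle {
    replace_triggered_by = [libvirt_ignition.ignition[each.key].id]
  }
}
```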
resource "libvirt_ignition" "ignition" {
  for_each = toset(var.machines)
  name     = "${var.cluster_name}-${each.key}-ignition"
  pool     = libvirt_pool.volumetmp.name
  content  = data.ct_config.vm-ignitions[each.key].rendered
}
resource "libvirt_domain" "machine" {
  for_each = toset(var.machines)
  name     = "${var.cluster_name}-${each.key}"
  vcpu     = var.virtual_cpus
  memory   = var.virtual_memory

  fw_cfg_name     = "opt/org.flatcar-linux/config"
  coreos_ignition = libvirt_ignition.ignition[each.key].id

  disk {
    volume_id = libvirt_volume.vm-disk[each.key].id
  }

  graphics {
    listen_type = "address"
  }

  # dynamic IP assignment on the bridge, NAT for Internet access
  network_interface {
    network_name   = "default"
    wait_for_lease = true
  }
}
data "ct_config" "vm-ignitions" {
  for_each = toset(var.machines)
  content  = data.template_file.vm-configs[each.key].rendered
}

data "template_file" "vm-configs" {
  for_each = toset(var.machines)
  template = file("${path.module}/machine-${each.key}.yaml.tmpl")
  vars = {
    ssh_keys = jsonencode(var.ssh_keys)
    name     = each.key
  }
}
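A side note on the data sources above: since Terraform 0.12 the built-in `templatefile()` function can render the same template without the separate template provider (whose `template_file` data source is deprecated). A sketch using the same variable names:

```hcl
# Sketch: render the per-machine template with the built-in templatefile()
# function instead of the deprecated template_file data source.
data "ct_config" "vm-ignitions" {
  for_each = toset(var.machines)
  content = templatefile("${path.module}/machine-${each.key}.yaml.tmpl", {
    ssh_keys = jsonencode(var.ssh_keys)
    name     = each.key
  })
}
```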
karl@k8s:~$ cat variables.tf
variable "machines" {
  type        = list(string)
  description = "Machine names, corresponding to machine-NAME.yaml.tmpl files"
}

variable "cluster_name" {
  type        = string
  description = "Cluster name used as prefix for the machine names"
}

variable "ssh_keys" {
  type        = list(string)
  description = "SSH public keys for user 'core'"
}

variable "base_image" {
  type        = string
  description = "Path to unpacked Flatcar Container Linux image flatcar_production_qemu_image.img (probably after a qemu-img resize IMG +5G)"
}

variable "virtual_memory" {
  type        = number
  default     = 2048
  description = "Virtual RAM in MB"
}

variable "virtual_cpus" {
  type        = number
  default     = 1
  description = "Number of virtual CPUs"
}
karl@k8s:~$ cat machine-mynode.yaml.tmpl
passwd:
  users:
#!/bin/bash
karl@k8s:~$ cat terraform.tfvars
base_image     = "file:///home/karl/Downloads/flatcar_production_qemu_image-libvirt-import.img"
cluster_name   = "mycluster"
machines       = ["mynode"]
virtual_memory = 768
ssh_keys       = ["ssh-rsa ... "]
Steps to Reproduce Issue
(Include debug logs if possible and relevant).
terraform init
terraform apply
Additional information:
Do you have SELinux or Apparmor/Firewall enabled? Some special configuration?
karl@k8s:~$ tail -n 5 /etc/apparmor.d/abstractions/libvirt-qemu
# Site-specific additions and overrides. See local/README for details.
#include <local/abstractions/libvirt-qemu>
# For ignition files
/var/lib/libvirt/flatcar-linux/ r,
/var/tmp/mycluster-pool/ r,
Have you tried to reproduce the issue without them enabled? No
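One theory worth checking for the "Failed to lock byte 100" error: qemu takes POSIX byte-range locks on image files, and the profile lines above grant only `r` on the pool directory itself, with no rule at all (and no `k` lock permission) for the files inside it. A hedged local override (the file location and exact rule syntax are assumptions to verify against your AppArmor version):

```
# Assumed file: /etc/apparmor.d/local/abstractions/libvirt-qemu
# rwk = read, write, and file locking on everything inside the pool directory
/var/tmp/mycluster-pool/** rwk,
```

After editing, reload the profiles (e.g. `sudo systemctl reload apparmor`) and retry; an `apparmor="DENIED"` line mentioning `file_lock` in `dmesg` during the failed `terraform apply` would confirm or refute this theory.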