dmacvicar / terraform-provider-libvirt

Terraform provider to provision infrastructure with Linux's KVM using libvirt

Terraform output is broken in 0.7.4. Can't pull the IP address information. #1037

Closed: southsidedean closed 7 months ago

southsidedean commented 8 months ago

System Information

Linux distribution

Ubuntu 22.04

Terraform version

I've rolled back to Terraform 1.5.7 to test 0.7.4 against 0.7.0/0.7.1, due to the signing-key issues in the older provider versions with Terraform 1.6.0/1.6.1.

Terraform v1.5.7
on linux_amd64

Provider and libvirt versions

provider = 0.7.4

Package: libvirt-daemon Version: 8.0.0-1ubuntu7.7



Description of Issue/Question

Terraform output is broken in 0.7.4: it can't pull the IP address information. I've double-checked that the QEMU guest agent is installed and running in the guests.

Output from a 0.7.4 run yields:

tdean@monolith:~/tf-temp/cka-d-cluster-builder-lab$ terraform apply cka-plan.plan
module.worker.libvirt_volume.base-volume-qcow2[0]: Creating...
module.controlplane.libvirt_volume.base-volume-qcow2[0]: Creating...
module.worker.libvirt_cloudinit_disk.commoninit[0]: Creating...
module.worker.libvirt_cloudinit_disk.commoninit[1]: Creating...
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Creating...
module.controlplane.libvirt_volume.base-volume-qcow2[0]: Still creating... [10s elapsed]
module.worker.libvirt_volume.base-volume-qcow2[0]: Still creating... [10s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[0]: Still creating... [10s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[1]: Still creating... [10s elapsed]
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Still creating... [10s elapsed]
module.controlplane.libvirt_volume.base-volume-qcow2[0]: Still creating... [20s elapsed]
module.worker.libvirt_volume.base-volume-qcow2[0]: Still creating... [20s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[0]: Still creating... [20s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[1]: Still creating... [20s elapsed]
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Still creating... [20s elapsed]
module.controlplane.libvirt_volume.base-volume-qcow2[0]: Creation complete after 24s [id=/media/virtual-machines/control-plane--base.qcow2]
module.controlplane.libvirt_volume.volume-qcow2[0]: Creating...
module.worker.libvirt_volume.base-volume-qcow2[0]: Still creating... [30s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[0]: Still creating... [30s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[1]: Still creating... [30s elapsed]
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Still creating... [30s elapsed]
module.controlplane.libvirt_volume.volume-qcow2[0]: Still creating... [10s elapsed]
module.worker.libvirt_volume.base-volume-qcow2[0]: Still creating... [40s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[0]: Still creating... [40s elapsed]
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Still creating... [40s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[1]: Still creating... [40s elapsed]
module.controlplane.libvirt_volume.volume-qcow2[0]: Still creating... [20s elapsed]
module.worker.libvirt_volume.base-volume-qcow2[0]: Creation complete after 47s [id=/media/virtual-machines/worker-node--base.qcow2]
module.worker.libvirt_volume.volume-qcow2[1]: Creating...
module.worker.libvirt_volume.volume-qcow2[0]: Creating...
module.worker.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 47s [id=/media/virtual-machines/worker-node-_init01.iso;c135ed37-ef2e-4626-93c2-517049927c02]
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 47s [id=/media/virtual-machines/control-plane-_init01.iso;ca3e7717-1c3c-4b99-a8c0-353348ef310f]
module.worker.libvirt_cloudinit_disk.commoninit[1]: Creation complete after 47s [id=/media/virtual-machines/worker-node-_init02.iso;ca9cfaf3-0097-4502-9f17-02dbcdf26cca]
module.controlplane.libvirt_volume.volume-qcow2[0]: Creation complete after 22s [id=/media/virtual-machines/control-plane-01.qcow2]
module.controlplane.libvirt_domain.virt-machine[0]: Creating...
module.worker.libvirt_volume.volume-qcow2[1]: Creation complete after 0s [id=/media/virtual-machines/worker-node-02.qcow2]
module.worker.libvirt_volume.volume-qcow2[0]: Creation complete after 0s [id=/media/virtual-machines/worker-node-01.qcow2]
module.worker.libvirt_domain.virt-machine[0]: Creating...
module.worker.libvirt_domain.virt-machine[1]: Creating...
╷
│ Error: couldn't retrieve IP address of domain id: a7983281-7f04-477d-8bfa-e9812922a490. Please check following: 
│ 1) is the domain running proplerly? 
│ 2) has the network interface an IP address? 
│ 3) Networking issues on your libvirt setup? 
│  4) is DHCP enabled on this Domain's network? 
│ 5) if you use bridge network, the domain should have the pkg qemu-agent installed 
│ IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup 
│  error retrieving interface addresses: error retrieving interface addresses: Guest agent is not responding: QEMU guest agent is not connected
│ 
│   with module.controlplane.libvirt_domain.virt-machine[0],
│   on .terraform/modules/controlplane/main.tf line 11, in resource "libvirt_domain" "virt-machine":
│   11: resource "libvirt_domain" "virt-machine" {
│ 
╵
╷
│ Error: couldn't retrieve IP address of domain id: 029a923b-85ad-4595-ae73-9e13b9b4de92. Please check following: 
│ 1) is the domain running proplerly? 
│ 2) has the network interface an IP address? 
│ 3) Networking issues on your libvirt setup? 
│  4) is DHCP enabled on this Domain's network? 
│ 5) if you use bridge network, the domain should have the pkg qemu-agent installed 
│ IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup 
│  error retrieving interface addresses: error retrieving interface addresses: Guest agent is not responding: QEMU guest agent is not connected
│ 
│   with module.worker.libvirt_domain.virt-machine[0],
│   on .terraform/modules/worker/main.tf line 11, in resource "libvirt_domain" "virt-machine":
│   11: resource "libvirt_domain" "virt-machine" {
│ 
╵
╷
│ Error: couldn't retrieve IP address of domain id: e3f29889-7147-47c8-b211-f8322c0a33bc. Please check following: 
│ 1) is the domain running proplerly? 
│ 2) has the network interface an IP address? 
│ 3) Networking issues on your libvirt setup? 
│  4) is DHCP enabled on this Domain's network? 
│ 5) if you use bridge network, the domain should have the pkg qemu-agent installed 
│ IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup 
│  error retrieving interface addresses: error retrieving interface addresses: Guest agent is not responding: QEMU guest agent is not connected
│ 
│   with module.worker.libvirt_domain.virt-machine[1],
│   on .terraform/modules/worker/main.tf line 11, in resource "libvirt_domain" "virt-machine":
│   11: resource "libvirt_domain" "virt-machine" {
│
╵

The objects appear to be created without issue with 0.7.4, other than the outputs being broken.

Rolling back by setting the provider version to 0.7.0/0.7.1 with Terraform 1.5.7 works fine:

tdean@monolith:~/tf-temp/cka-d-cluster-builder-lab$ terraform apply cka-plan.plan
module.controlplane.libvirt_volume.base-volume-qcow2[0]: Creating...
module.worker.libvirt_volume.base-volume-qcow2[0]: Creating...
module.worker.libvirt_cloudinit_disk.commoninit[0]: Creating...
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Creating...
module.worker.libvirt_cloudinit_disk.commoninit[1]: Creating...
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Still creating... [10s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[0]: Still creating... [10s elapsed]
module.controlplane.libvirt_volume.base-volume-qcow2[0]: Still creating... [10s elapsed]
module.worker.libvirt_volume.base-volume-qcow2[0]: Still creating... [10s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[1]: Still creating... [10s elapsed]
module.controlplane.libvirt_volume.base-volume-qcow2[0]: Still creating... [20s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[0]: Still creating... [20s elapsed]
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Still creating... [20s elapsed]
module.worker.libvirt_volume.base-volume-qcow2[0]: Still creating... [20s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[1]: Still creating... [20s elapsed]
module.controlplane.libvirt_volume.base-volume-qcow2[0]: Creation complete after 24s [id=/media/virtual-machines/control-plane--base.qcow2]
module.controlplane.libvirt_volume.volume-qcow2[0]: Creating...
module.worker.libvirt_volume.base-volume-qcow2[0]: Still creating... [30s elapsed]
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Still creating... [30s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[0]: Still creating... [30s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[1]: Still creating... [30s elapsed]
module.controlplane.libvirt_volume.volume-qcow2[0]: Still creating... [10s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[0]: Still creating... [40s elapsed]
module.worker.libvirt_volume.base-volume-qcow2[0]: Still creating... [40s elapsed]
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Still creating... [40s elapsed]
module.worker.libvirt_cloudinit_disk.commoninit[1]: Still creating... [40s elapsed]
module.controlplane.libvirt_volume.volume-qcow2[0]: Still creating... [20s elapsed]
module.worker.libvirt_volume.base-volume-qcow2[0]: Creation complete after 49s [id=/media/virtual-machines/worker-node--base.qcow2]
module.worker.libvirt_volume.volume-qcow2[1]: Creating...
module.worker.libvirt_volume.volume-qcow2[0]: Creating...
module.worker.libvirt_cloudinit_disk.commoninit[1]: Creation complete after 49s [id=/media/virtual-machines/worker-node-_init02.iso;e3f0f52a-3ced-4ce3-94ef-1c03ef22ae7b]
module.controlplane.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 49s [id=/media/virtual-machines/control-plane-_init01.iso;3b763408-791d-4095-b2a9-9d930b73674b]
module.worker.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 49s [id=/media/virtual-machines/worker-node-_init01.iso;f7f9e4ad-b4c6-42ba-b3dd-1ccade030f60]
module.controlplane.libvirt_volume.volume-qcow2[0]: Creation complete after 25s [id=/media/virtual-machines/control-plane-01.qcow2]
module.controlplane.libvirt_domain.virt-machine[0]: Creating...
module.worker.libvirt_volume.volume-qcow2[1]: Creation complete after 1s [id=/media/virtual-machines/worker-node-02.qcow2]
module.worker.libvirt_volume.volume-qcow2[0]: Creation complete after 1s [id=/media/virtual-machines/worker-node-01.qcow2]
module.worker.libvirt_domain.virt-machine[1]: Creating...
module.worker.libvirt_domain.virt-machine[0]: Creating...
module.controlplane.libvirt_domain.virt-machine[0]: Still creating... [10s elapsed]
module.worker.libvirt_domain.virt-machine[1]: Still creating... [10s elapsed]
module.worker.libvirt_domain.virt-machine[0]: Still creating... [10s elapsed]
module.controlplane.libvirt_domain.virt-machine[0]: Still creating... [20s elapsed]
module.worker.libvirt_domain.virt-machine[1]: Still creating... [20s elapsed]
module.worker.libvirt_domain.virt-machine[0]: Still creating... [20s elapsed]
module.controlplane.libvirt_domain.virt-machine[0]: Still creating... [30s elapsed]
module.worker.libvirt_domain.virt-machine[1]: Still creating... [30s elapsed]
module.worker.libvirt_domain.virt-machine[0]: Still creating... [30s elapsed]
module.controlplane.libvirt_domain.virt-machine[0]: Still creating... [40s elapsed]
module.worker.libvirt_domain.virt-machine[1]: Still creating... [40s elapsed]
module.worker.libvirt_domain.virt-machine[0]: Still creating... [40s elapsed]
module.controlplane.libvirt_domain.virt-machine[0]: Still creating... [50s elapsed]
module.worker.libvirt_domain.virt-machine[1]: Still creating... [50s elapsed]
module.worker.libvirt_domain.virt-machine[0]: Still creating... [50s elapsed]
module.controlplane.libvirt_domain.virt-machine[0]: Still creating... [1m1s elapsed]
module.worker.libvirt_domain.virt-machine[1]: Still creating... [1m0s elapsed]
module.worker.libvirt_domain.virt-machine[0]: Still creating... [1m0s elapsed]
module.controlplane.libvirt_domain.virt-machine[0]: Still creating... [1m11s elapsed]
module.worker.libvirt_domain.virt-machine[1]: Still creating... [1m10s elapsed]
module.worker.libvirt_domain.virt-machine[0]: Still creating... [1m10s elapsed]
module.worker.libvirt_domain.virt-machine[1]: Provisioning with 'remote-exec'...
module.worker.libvirt_domain.virt-machine[1] (remote-exec): Connecting to remote host via SSH...
module.worker.libvirt_domain.virt-machine[1] (remote-exec):   Host: 10.0.1.184
module.worker.libvirt_domain.virt-machine[1] (remote-exec):   User: ubuntu
module.worker.libvirt_domain.virt-machine[1] (remote-exec):   Password: false
module.worker.libvirt_domain.virt-machine[1] (remote-exec):   Private key: true
module.worker.libvirt_domain.virt-machine[1] (remote-exec):   Certificate: false
module.worker.libvirt_domain.virt-machine[1] (remote-exec):   SSH Agent: false
module.worker.libvirt_domain.virt-machine[1] (remote-exec):   Checking Host Key: false
module.worker.libvirt_domain.virt-machine[1] (remote-exec):   Target Platform: unix
module.controlplane.libvirt_domain.virt-machine[0]: Provisioning with 'remote-exec'...
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec): Connecting to remote host via SSH...
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec):   Host: 10.0.1.160
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec):   User: ubuntu
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec):   Password: false
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec):   Private key: true
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec):   Certificate: false
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec):   SSH Agent: false
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec):   Checking Host Key: false
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec):   Target Platform: unix
module.worker.libvirt_domain.virt-machine[0]: Provisioning with 'remote-exec'...
module.worker.libvirt_domain.virt-machine[0] (remote-exec): Connecting to remote host via SSH...
module.worker.libvirt_domain.virt-machine[0] (remote-exec):   Host: 10.0.1.95
module.worker.libvirt_domain.virt-machine[0] (remote-exec):   User: ubuntu
module.worker.libvirt_domain.virt-machine[0] (remote-exec):   Password: false
module.worker.libvirt_domain.virt-machine[0] (remote-exec):   Private key: true
module.worker.libvirt_domain.virt-machine[0] (remote-exec):   Certificate: false
module.worker.libvirt_domain.virt-machine[0] (remote-exec):   SSH Agent: false
module.worker.libvirt_domain.virt-machine[0] (remote-exec):   Checking Host Key: false
module.worker.libvirt_domain.virt-machine[0] (remote-exec):   Target Platform: unix
module.worker.libvirt_domain.virt-machine[1] (remote-exec): Connected!
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec): Connected!
module.worker.libvirt_domain.virt-machine[0] (remote-exec): Connected!
module.controlplane.libvirt_domain.virt-machine[0]: Still creating... [1m21s elapsed]
module.worker.libvirt_domain.virt-machine[1]: Still creating... [1m20s elapsed]
module.worker.libvirt_domain.virt-machine[0]: Still creating... [1m20s elapsed]
module.worker.libvirt_domain.virt-machine[1] (remote-exec): Virtual Machine worker-node-02 is UP!
module.worker.libvirt_domain.virt-machine[1] (remote-exec): Wed Oct 11 15:10:47 UTC 2023
module.worker.libvirt_domain.virt-machine[0] (remote-exec): Virtual Machine worker-node-01 is UP!
module.worker.libvirt_domain.virt-machine[0] (remote-exec): Wed Oct 11 15:10:47 UTC 2023
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec): Virtual Machine control-plane-01 is UP!
module.worker.libvirt_domain.virt-machine[1]: Creation complete after 1m21s [id=32f3ef5a-af61-48a0-b5a5-de7a396e4a3d]
module.controlplane.libvirt_domain.virt-machine[0] (remote-exec): Wed Oct 11 15:10:47 UTC 2023
module.worker.libvirt_domain.virt-machine[0]: Creation complete after 1m21s [id=e44ccc8d-3aa5-4e66-a140-ae3fecf08842]
module.controlplane.libvirt_domain.virt-machine[0]: Creation complete after 1m22s [id=74f1d6f0-c0e3-4d07-9a9a-b70ef9195d34]

Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

Outputs:

control-planes = {
  "ip_address" = [
    "10.0.1.160",
  ]
  "name" = [
    "control-plane-01",
  ]
}
worker-nodes = {
  "ip_address" = [
    "10.0.1.95",
    "10.0.1.184",
  ]
  "name" = [
    "worker-node-01",
    "worker-node-02",
  ]
}

Setup

The Terraform code is from my GitHub repository:

# Terraform code to stand up infrastructure to build
# an Open Source Kubernetes cluster
#
# Tom Dean
# tom@dean33.com
#
# Last edit 10/11/2023
#
# Based on the Terraform module for KVM/Libvirt Virtual Machine
# https://registry.terraform.io/modules/MonolithProjects/vm/libvirt/1.10.0
# Utilizes the dmacvicar/libvirt Terraform provider

# Let's set some variables!

# Cluster sizing: minimum one of each!
# We can set the number of control plane and worker nodes here

variable "control_plane_nodes" {
  type = number
  default = 1
}

variable "worker_nodes" {
  type = number
  default = 2
}

# Hostname prefixes
# This controls how the hostnames are generated

variable "cp_prefix" {
  type = string
  default = "control-plane-"
}

variable "worker_prefix" {
  type = string
  default = "worker-node-"
}

# Node sizing
# Start with the control planes

variable "cp_cpu" {
  type = number
  default = 2
}

variable "cp_disk" {
  type = number
  default = 25
}

variable "cp_memory" {
  type = number
  default = 8192
}

# On to the worker nodes

variable "worker_cpu" {
  type = number
  default = 2
}

variable "worker_disk" {
  type = number
  default = 25
}

variable "worker_memory" {
  type = number
  default = 8192
}

# Disk Pool to use
# Control Plane

variable "cp_diskpool" {
  type = string
  default = "default"
}

# Worker Nodes

variable "worker_diskpool" {
  type = string
  default = "default"
}

# User / Key information
# Same across all nodes, customize if you wish

variable "privateuser" {
  type = string
  default = "ubuntu"
}

variable "privatekey" {
  type = string
  default = "~/.ssh/id_ed25519"
}

variable "pubkey" {
  type = string
  default = "~/.ssh/id_ed25519.pub"
}

# Other node configuration

variable "timezone" {
  type = string
  default = "CST"
}

variable "osimg" {
  type = string
  default = "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
}

# Set our Terraform provider here
# We're going to use libvirt on our local machine

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

# Module for building our control plane nodes

module "controlplane" {
  source  = "MonolithProjects/vm/libvirt"
  version = "1.10.0"

  vm_hostname_prefix = var.cp_prefix
  vm_count    = var.control_plane_nodes
  memory      = var.cp_memory
  vcpu        = var.cp_cpu
  pool        = var.cp_diskpool
  system_volume = var.cp_disk
  dhcp        = true
  ssh_admin   = var.privateuser
  ssh_private_key = var.privatekey
  ssh_keys    = [
    file(var.pubkey),
  ]
  time_zone   = var.timezone
  os_img_url  = var.osimg
}

# Module for building our worker nodes

module "worker" {
  source  = "MonolithProjects/vm/libvirt"
  version = "1.10.0"

  vm_hostname_prefix = var.worker_prefix
  vm_count    = var.worker_nodes
  memory      = var.worker_memory
  vcpu        = var.worker_cpu
  pool        = var.worker_diskpool
  system_volume = var.worker_disk
  dhcp        = true
  ssh_admin   = var.privateuser
  ssh_private_key = var.privatekey
  ssh_keys    = [
    file(var.pubkey),
  ]
  time_zone   = var.timezone
  os_img_url  = var.osimg
}

# Outputs

output "control-planes" {
  value = module.controlplane
}

output "worker-nodes" {
  value = module.worker
}
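
Since the modules expose ip_address and name lists (visible in the Outputs above), the outputs could also be narrowed to just the addresses. A minimal sketch, assuming the module output names match those shown in the apply output:

# Hypothetical narrower outputs: expose only the IP lists,
# assuming the module publishes an "ip_address" output, as the
# apply output above suggests.

output "control_plane_ips" {
  value = module.controlplane.ip_address
}

output "worker_ips" {
  value = module.worker.ip_address
}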

Steps to Reproduce Issue

terraform init
terraform plan -out cka-plan.plan
terraform apply cka-plan.plan

Additional information:

Do you have SELinux or Apparmor/Firewall enabled? Some special configuration?

No

Have you tried to reproduce the issue without them enabled?

No, the issue goes away with older provider versions on the same system.

rgl commented 8 months ago

Maybe this is because of this timeouts change?

https://github.com/dmacvicar/terraform-provider-libvirt/commit/01e507d95b8b49fb6d94b8ecde7e34ee4cb1c567#diff-ec1bf6b27daed6653f7d487358b1b3b311e60427f79397bc63e6c74eecc3667eL70-R72

PS: It's not that. Here's the log with my added XXX debug messages. The first shows when it started to poll, and the second when it failed with an error; there is only about a 5s difference between them, which means the waitFunc gives up too soon:

2023-10-14T13:27:07.154+0100 [INFO]  provider.terraform-provider-libvirt: 2023/10/14 13:27:07 [INFO] Domain ID: b76f6562-6235-40a2-8575-2b8eb2b1d061: timestamp=2023-10-14T13:27:07.154+0100
2023-10-14T13:27:07.154+0100 [INFO]  provider.terraform-provider-libvirt: 2023/10/14 13:27:07 [DEBUG] XXX timeout=5m0s resourceStateMinTimeout=3000000000 resourceStateDelay=5000000000: timestamp=2023-10-14T13:27:07.154+0100
2023-10-14T13:27:07.154+0100 [INFO]  provider.terraform-provider-libvirt: 2023/10/14 13:27:07 [DEBUG] Waiting for state to become: [all-addresses-obtained]: timestamp=2023-10-14T13:27:07.154+0100
2023-10-14T13:27:10.175+0100 [TRACE] dag/walk: vertex "provider[\"registry.terraform.io/dmacvicar/libvirt\"] (close)" is waiting for "libvirt_domain.example"
2023-10-14T13:27:10.175+0100 [TRACE] dag/walk: vertex "root" is waiting for "output.ip (expand)"
2023-10-14T13:27:10.175+0100 [TRACE] dag/walk: vertex "output.ip (expand)" is waiting for "libvirt_domain.example"
2023-10-14T13:27:12.156+0100 [INFO]  provider.terraform-provider-libvirt: 2023/10/14 13:27:12 [DEBUG] waiting for network address for iface=52:54:00:5A:F1:B9: timestamp=2023-10-14T13:27:12.155+0100
2023-10-14T13:27:12.156+0100 [INFO]  provider.terraform-provider-libvirt: 2023/10/14 13:27:12 [DEBUG] qemu-agent used to query interface info: timestamp=2023-10-14T13:27:12.156+0100
2023-10-14T13:27:12.156+0100 [INFO]  provider.terraform-provider-libvirt: 2023/10/14 13:27:12 [DEBUG] XXX timeout=5m0s resourceStateMinTimeout=3000000000 resourceStateDelay=5000000000 err=error retrieving interface addresses: error retrieving interface addresses: Guest agent is not responding: QEMU guest agent is not connected: timestamp=2023-10-14T13:27:12.156+0100
2023-10-14T13:27:12.156+0100 [TRACE] provider.terraform-provider-libvirt: Called downstream: tf_resource_type=libvirt_domain tf_rpc=ApplyResourceChange @caller=/home/vagrant/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.24.1/helper/schema/resource.go:838 tf_req_id=9163b083-d4ce-aca8-76fa-0afbebd16616 @module=sdk.helper_schema tf_provider_addr=provider timestamp=2023-10-14T13:27:12.156+0100
2023-10-14T13:27:12.157+0100 [TRACE] provider.terraform-provider-libvirt: Received downstream response: diagnostic_warning_count=0 tf_proto_version=5.3 tf_provider_addr=provider tf_req_id=9163b083-d4ce-aca8-76fa-0afbebd16616 tf_resource_type=libvirt_domain tf_rpc=ApplyResourceChange @caller=/home/vagrant/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.14.2/tfprotov5/internal/tf5serverlogging/downstream_request.go:37 @module=sdk.proto diagnostic_error_count=1 tf_req_duration_ms=6785 timestamp=2023-10-14T13:27:12.157+0100
2023-10-14T13:27:12.157+0100 [ERROR] provider.terraform-provider-libvirt: Response contains error diagnostic: @caller=/home/vagrant/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.14.2/tfprotov5/internal/diag/diagnostics.go:55 diagnostic_summary="couldn't retrieve IP address of domain id: b76f6562-6235-40a2-8575-2b8eb2b1d061. Please check following: 
1) is the domain running properly? 
2) has the network interface an IP address? 
3) Networking issues on your libvirt setup? 
 4) is DHCP enabled on this Domain's network? 
5) if you use bridge network, the domain should have the pkg qemu-agent installed 
IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup 
 error retrieving interface addresses: error retrieving interface addresses: Guest agent is not responding: QEMU guest agent is not connected" tf_provider_addr=provider tf_rpc=ApplyResourceChange tf_req_id=9163b083-d4ce-aca8-76fa-0afbebd16616 tf_resource_type=libvirt_domain @module=sdk.proto diagnostic_detail= diagnostic_severity=ERROR tf_proto_version=5.3 timestamp=2023-10-14T13:27:12.157+0100

PPS: It fails at this call site:

https://github.com/dmacvicar/terraform-provider-libvirt/blob/v0.7.4/libvirt/domain.go#L48-L51

with the error:

error retrieving interface addresses: error retrieving interface addresses: Guest agent is not responding: QEMU guest agent is not connected

The fix is at https://github.com/dmacvicar/terraform-provider-libvirt/pull/1039

southsidedean commented 8 months ago

Thanks for the update!

momoah commented 8 months ago

Hello, I'm having the same issue with my bridge interface (no problem with the virtual bridge created by libvirt). Does this mean we need to wait for a 0.7.5 release, or can we just build it from source?

thequailman commented 7 months ago

You can work around this by setting the version of the plugin to 0.7.1.
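
In HCL terms, that amounts to a version constraint like the following (a minimal sketch, based on the terraform block from the config above):

terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      # Pin to 0.7.1 to avoid the 0.7.4 IP-address regression.
      version = "0.7.1"
    }
  }
}

After changing the constraint, re-run terraform init -upgrade so the lock file actually selects the older provider.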

momoah commented 7 months ago

> You can work around this by setting the version of the plugin to 0.7.1.

Superb! That worked a treat! I'll stick with this version until this is fixed in later versions.

kmasaryk commented 7 months ago

Setting the version to 0.7.1 appears to fix the immediate problem with QEMU agent communication, since Terraform no longer throws an error when qemu_agent = true, but it's still unable to pull the IP for me. This appears to be due to the issue described in #1047, which is potentially fixed in PR #1048.

Performing a terraform destroy does produce valid addresses in the output, so the communication is there, but it looks like wait_for_lease = true isn't actually waiting for an IP address to be assigned once the interface comes up.
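
For reference, both settings mentioned here live on the libvirt_domain resource (the MonolithProjects/vm module sets them internally). A minimal sketch of the relevant arguments; the resource and network names are illustrative only:

resource "libvirt_domain" "example" {
  name   = "example"
  memory = 2048
  vcpu   = 2

  # Ask the provider to query the QEMU guest agent for interface
  # addresses instead of relying on DHCP leases.
  qemu_agent = true

  network_interface {
    network_name   = "default"
    # Block resource creation until the interface has an IP;
    # per the comment above, this doesn't appear to wait
    # correctly on 0.7.1 in all cases.
    wait_for_lease = true
  }
}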