Closed · shyce closed 7 months ago
After reviewing the logs, I've observed that the VMID specified in my configuration (starting from 200) isn't being respected during the VM creation process. The logs indicate that Terraform queries for next available VMIDs much lower than 200, suggesting a potential issue with how VMIDs are determined or an oversight in my configuration.
Additionally, despite the logs showing successful API interactions (200 OK responses) and the Proxmox provider seemingly proceeding with the setup, there's no evidence that VM creation attempts are actually being made. This is confusing because I don't observe any errors or failed attempts in the logs; it's as if the creation process stalls or never initiates beyond the preliminary API calls.
Hello,
I ran into the same problem.
The following configuration works for me:
terraform {
  required_providers {
    # Proxmox
    proxmox = {
      source  = "Telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
  required_version = ">= 1.7.1"
}
provider "proxmox" {
  pm_api_url  = "https://192.168.2.150:8006/api2/json"
  pm_user     = "<user>"
  pm_password = "<password>"
  pm_debug    = true
  pm_log_levels = {
    _default    = "debug"
    _capturelog = ""
  }
}
resource "proxmox_vm_qemu" "postgre" {
  # count = 1
  vmid = 100
  name = "postgre"
  desc = "PostgreSQL host node"
  # Valid nodes: proxmox-1, proxmox-2
  target_node = "proxmox-1"
  clone       = "ubuntu-2204-cloudinit"
  os_type     = "ubuntu"
  # Default bios is seabios
  bios    = "seabios"
  cores   = 32
  sockets = 1
  cpu     = "EPYC-v3"
  memory  = 131072
  scsihw   = "virtio-scsi-pci"
  bootdisk = "scsi0"
  # Disk settings
  disks {
    scsi {
      scsi0 {
        disk {
          size    = 2048
          storage = "local-lvm"
        }
      }
    }
  }
  # Network settings
  network {
    model  = "virtio"
    bridge = "vmbr0"
  }
  ipconfig0 = "gw=192.168.2.1,ip=192.168.2.154/24"
  # Specify the cloud-init cdrom storage
  cloudinit_cdrom_storage = "local-lvm"
  # User settings
  ciuser     = "testuser"
  cipassword = "123456"
  sshkeys    = file("/root/.ssh/id_rsa.pub")
}
The configuration works if the agent is not set, but there is a warning and I cannot destroy the VM with terraform destroy:
proxmox_vm_qemu.postgre: Creation complete after 31s [id=proxmox-1/qemu/100]
2024-02-01T15:12:20.078+0800 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2024-02-01T15:12:20.079+0800 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-02-01T15:12:20.086+0800 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/telmate/proxmox/3.0.1-rc1/linux_amd64/terraform-provider-proxmox_v3.0.1-rc1 pid=419832
2024-02-01T15:12:20.086+0800 [DEBUG] provider: plugin exited
╷
│ Warning: Qemu Guest Agent support is disabled from proxmox config.
│
│ with proxmox_vm_qemu.postgre,
│ on main.tf line 29, in resource "proxmox_vm_qemu" "postgre":
│ 29: resource "proxmox_vm_qemu" "postgre" {
│
│ Qemu Guest Agent support is required to make communications with the VM
╵
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
terraform destroy will get stuck at:
2024-02-01T15:21:57.190+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:21:57 [DEBUG][initConnInfo] check ip result error 500 QEMU guest agent is not running: timestamp="2024-02-01T15:21:57.190+0800"
2024-02-01T15:22:02.195+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:22:02 >>>>>>>>>> REQUEST:
GET /api2/json/nodes/proxmox-1/qemu/100/agent/network-get-interfaces HTTP/1.1
Host: 192.168.2.150:8006
User-Agent: Go-http-client/1.1
Accept: application/json
Authorization: PVEAuthCookie=PVE:terraform-prov@pve:65BB4679::J8jzwPehBoiyX8JU+T2xJW7wmr6JK8vdDCNvD64lFxwd6xX0b6wV4ERAtjOK1jDKrk4jop31IbDLy1lzfmtnRPqKLAKnUqG/D9SyqrLdr8hzS2EVG/MWrcIDC10UrmADAiKC5hJ9iUUPSy3YK4ELM55Ombs9FL+6iEPv+QS8OjuQzQsishDrWhagPVSmLovrgJmHoceUtC6rp7VY/nf376ygUnyqaaWua3yiaH0xCRzzkbsPMxw4om66Fbu3pxdUuHlndPcEaSw1zaOXOyKH9TQO8Q4m8rHF9xCNeObYEzzYnnMy44p7HFCcJJ2sXiaoJ2/gxDdnrZHfR7p9IIoJCg==
CSRFPreventionToken: 65BB4679:LTTji15BKgB4jGql8am3z5ktRAh3cRFWCTE9D/NehUw
Accept-Encoding: gzip
: timestamp="2024-02-01T15:22:02.195+0800"
2024-02-01T15:22:05.215+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:22:05 <<<<<<<<<< RESULT:
HTTP/1.1 500 QEMU guest agent is not running
Connection: close
Content-Length: 13
Cache-Control: max-age=0
Content-Type: application/json;charset=UTF-8
Date: Thu, 01 Feb 2024 07:22:05 GMT
Expires: Thu, 01 Feb 2024 07:22:05 GMT
Pragma: no-cache
Server: pve-api-daemon/3.0
{"data":null}: timestamp="2024-02-01T15:22:05.215+0800"
2024-02-01T15:22:05.215+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:22:05 [DEBUG][initConnInfo] check ip result error 500 QEMU guest agent is not running: timestamp="2024-02-01T15:22:05.215+0800"
2024-02-01T15:22:10.220+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:22:10 >>>>>>>>>> REQUEST:
If the agent is set to 1, terraform apply will get stuck at:
2024-02-01T15:24:31.062+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:24:31 [DEBUG][initConnInfo] check ip result error 500 QEMU guest agent is not running: timestamp="2024-02-01T15:24:31.061+0800"
2024-02-01T15:24:36.066+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:24:36 >>>>>>>>>> REQUEST:
GET /api2/json/nodes/proxmox-1/qemu/100/agent/network-get-interfaces HTTP/1.1
Host: 192.168.2.150:8006
User-Agent: Go-http-client/1.1
Accept: application/json
Authorization: PVEAuthCookie=PVE:terraform-prov@pve:65BB470C::Bpd6DQNC/ato3DSqjuhe8GYTUedlc2MctfE2oW/1Ldze8eeKZo4QeqKnFPF3T9alBhlqZebpCnmJyU2HIb+62d2T6WOUPynx1NAd1SOn00c65vJcWj9FgX5W7juaM+2p/lMAMomw7lwHbiKm5vszY5t9bZ94jB5ezaXYo0ZA8ssq7wmkt73Ov5tHlBeQVoTbb7kuQz3CPWYqfKvzjqmc5Ci7lpPOWAgvk45ZD71PQEllkVW/Z3o5+rg2G+e8X2GHRG7YrsYPmgHRIclK145pqWnY270M+5UiAiC3V0Z2k4jU8YPGEkBBhJ9/t5VXnk8VSYD1Jbi4UehxA60ZswmVWg==
CSRFPreventionToken: 65BB470C:MSnspjD2luV3BgKtjrt0FsCuleoLDpoG8GF8EEWIJSk
Accept-Encoding: gzip
: timestamp="2024-02-01T15:24:36.066+0800"
proxmox_vm_qemu.postgre: Still creating... [40s elapsed]
2024-02-01T15:24:39.083+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:24:39 <<<<<<<<<< RESULT:
HTTP/1.1 500 QEMU guest agent is not running
Connection: close
Content-Length: 13
Cache-Control: max-age=0
Content-Type: application/json;charset=UTF-8
Date: Thu, 01 Feb 2024 07:24:39 GMT
Expires: Thu, 01 Feb 2024 07:24:39 GMT
Pragma: no-cache
Server: pve-api-daemon/3.0
{"data":null}: timestamp="2024-02-01T15:24:39.083+0800"
2024-02-01T15:24:39.083+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:24:39 [DEBUG][initConnInfo] check ip result error 500 QEMU guest agent is not running: timestamp="2024-02-01T15:24:39.083+0800"
2024-02-01T15:24:44.088+0800 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/01 15:24:44 >>>>>>>>>> REQUEST:
terraform apply only gets stuck at the network-get-interfaces check; the virtual machine itself is created:
root@proxmox-1:~/Work/tpt-terraform-scripts/proxmox# qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
100 postgre running 131072 2048.00 422311
1000 ubuntu-2204-cloudinit stopped 2048 2.20 0
1001 ubuntu-2204-instance-01 stopped 2048 2.20 0
Sorry, the Ubuntu cloud image does not include the qemu-guest-agent.
I ran sudo apt install -y qemu-guest-agent && sudo systemctl start qemu-guest-agent in the virtual machine created with Terraform, through the VNC console in the Proxmox Web UI. After that, terraform destroy works.
It seems that I need to execute these two commands in the cloud-init stage.
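For reference, a minimal cloud-init user-data sketch that bakes those two commands into provisioning (assuming the image ships cloud-init but not the agent, as with the stock Ubuntu cloud image):

```yaml
#cloud-config
# Install and start the guest agent during first boot so the
# Terraform provider's IP lookup does not hang.
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```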
Thank you! I got further, but I'm still stuck. It creates the machines now if I set cloudinit_cdrom_storage to the proper value and also set agent to 0 (only on creation, or it gets stuck at "Still creating..."). I can then switch it to 1 after creation for things like destroy, and it works fine. The problem is that it creates a secondary IDE drive.
If I try to delete the original ide2 drive, it starts to boot the cloud-init clone from ide3 but then says:
some time passes...
wget https://cloud-images.ubuntu.com/mantic/current/mantic-server-cloudimg-amd64.img
apt-get install libguestfs-tools
virt-customize -a mantic-server-cloudimg-amd64.img \
--run-command 'apt-get update' \
--install software-properties-common,qemu-guest-agent,wget,curl,git \
--run-command 'systemctl enable qemu-guest-agent' \
--run-command 'systemctl start qemu-guest-agent' \
--run-command 'systemctl enable fstrim.timer' \
--run-command 'systemctl start fstrim.timer' \
--run-command 'add-apt-repository -y ppa:neovim-ppa/stable' \
--run-command 'apt-get update' \
--install neovim
# eterna = zfs storage
# 9000 = vmid
qm create 9000 --memory 2048 --name mantic --net0 virtio,bridge=vmbr0
qm importdisk 9000 mantic-server-cloudimg-amd64.img eterna
qm set 9000 --scsihw virtio-scsi-pci --scsi0 eterna:vm-9000-disk-0
qm set 9000 --ide2 eterna:cloudinit
qm set 9000 --boot c --bootdisk scsi0
qm set 9000 --serial0 socket --vga serial0
qm set 9000 --template 1
resource "proxmox_vm_qemu" "anima" {
  count       = var.clusterSize
  name        = "${var.prefix}${count.index + 1}"
  vmid        = var.vmidStartRange + count.index
  desc        = var.mdClusterDescription
  target_node = var.targetNode
  clone       = var.cloudTemplate
  os_type     = "ubuntu"
  cpu         = "host"
  cores       = var.cores
  sockets     = var.sockets
  memory      = var.memory
  agent       = var.qemuAgent
  disks {
    scsi {
      scsi0 {
        disk {
          storage    = var.storageDevice
          size       = var.diskSize
          emulatessd = var.ssd
          discard    = true
        }
      }
    }
  }
  network {
    bridge = "vmbr0"
    model  = "virtio"
  }
  ipconfig0    = "ip=${var.networkID}.${count.index + var.networkIPStartRange}${var.networkCIDR},gw=${var.networkID}.1"
  nameserver   = "${var.networkID}.1"
  searchdomain = "home.arpa"
  cloudinit_cdrom_storage = "eterna"
  ciuser     = var.username
  cipassword = var.password
  sshkeys    = var.sshKeys
}
proxmoxURL = "https://<domain>:8006/api2/json"
pmUser = "terraform-prov@pve!anima-cluster"
pmTokenSecret = "<secret>"
cloudTemplate = "mantic"
prefix = "anima-"
clusterSize = 6
cores = 4
sockets = 1
memory = 8192
# If I do not set qemuAgent to 0 here on first creation, machines will be stuck at "Still creating..."
# even though the qemu-guest-agent service is slipstreamed and enabled. I must only enable it after the machines
# are created which is annoying. It works fine after creation to destroy and such.
qemuAgent = 0
targetNode = "pve"
networkID = "10.0.0"
networkCIDR = "/24"
networkIPStartRange = 9
vmidStartRange = 200
storageDevice = "eterna"
diskSize = 100
ssd = true
username = "<user>"
password = "<pass>"
sshKeys = <<-EOT
ssh-ed25519 AAAA...
ssh-ed25519 AAAA...
EOT
mdClusterDescription = <<-EOF
# Home Lab Cluster
EOF
variable "pmUser" {
  type = string
}
variable "pmTokenSecret" {
  type = string
}
variable "proxmoxURL" {
  type = string
}
variable "cloudTemplate" {
  type = string
}
variable "prefix" {
  type    = string
  default = "cloud-cluster-"
}
variable "clusterSize" {
  type    = number
  default = 3
}
variable "cores" {
  type    = number
  default = 2
}
variable "sockets" {
  type    = number
  default = 1
}
variable "memory" {
  type    = number
  default = 4096
}
variable "qemuAgent" {
  type    = number
  default = 1
}
variable "targetNode" {
  type    = string
  default = "pve"
}
variable "networkID" {
  type    = string
  default = "192.168.0"
}
variable "networkCIDR" {
  type    = string
  default = "/24"
}
variable "networkIPStartRange" {
  type    = number
  default = 150
}
variable "vmidStartRange" {
  type    = number
  default = 1000
}
variable "storageDevice" {
  type = string
}
variable "diskType" {
  type    = string
  default = "scsi"
}
variable "diskSize" {
  type    = string
  default = "50G"
}
variable "ssd" {
  # emulatessd expects a boolean (the tfvars sets ssd = true)
  type    = bool
  default = false
}
variable "username" {
  type    = string
  default = "admin"
}
variable "password" {
  type    = string
  default = "kube"
}
variable "sshKeys" {
  type    = string
  default = ""
}
variable "mdClusterDescription" {
  type    = string
  default = <<EOF
# Home Lab Cluster
EOF
}
This looks like a bug in the handling of the "lock" that limits the number of parallel requests to the Proxmox API. This is controlled by pm_parallel in the provider config. If you set pm_parallel to a value larger than the number of resources you're trying to create, this effectively disables the artificial limit and should let Terraform continue the creation process.
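For illustration, a sketch of where pm_parallel goes (the value 10 is an example; pick anything larger than the number of VMs in one apply):

```hcl
provider "proxmox" {
  pm_api_url  = "https://192.168.2.150:8006/api2/json"
  pm_user     = "<user>"
  pm_password = "<password>"
  # Larger than the number of resources created in one apply,
  # which effectively disables the parallelism lock.
  pm_parallel = 10
}
```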
My config looked almost the same before version 3.0.1, back when I was using 2.9.3 with Proxmox 7.1. I upgraded to Proxmox 8.1.3 and ran into issues with provider version 2.9.3, so the next step was to switch to 3.0.1, which resolved the code issues. I don't remember the details, but there were some problems with network fields being strings rather than ints and so on; irrelevant now, because 3.0.1 fixed that. But I am stuck on the same "Still creating..." problem you're facing, with one small difference: the cloud-init drive is removed during deployment and I can't figure out why. And I'm tired of it. Every time I deploy, the cloud-init drive disappears during the process.
The repo is here; I don't want to paste the full content of those files. The most important part is the null_resource that creates the config files:
null_resource.cloud_init_config_files[0]: Creating...
2024-02-05T18:43:33.699+0100 [INFO] Starting apply for null_resource.cloud_init_config_files[0]
null_resource.cloud_init_network-config_files[0]: Creating...
2024-02-05T18:43:33.699+0100 [INFO] Starting apply for null_resource.cloud_init_network-config_files[0]
2024-02-05T18:43:33.699+0100 [DEBUG] null_resource.cloud_init_config_files[0]: applying the planned Create change
2024-02-05T18:43:33.699+0100 [DEBUG] null_resource.cloud_init_network-config_files[0]: applying the planned Create change
null_resource.cloud_init_network-config_files[0]: Provisioning with 'file'...
null_resource.cloud_init_config_files[0]: Provisioning with 'file'...
2024-02-05T18:43:33.701+0100 [DEBUG] Connecting to 10.0.1.1:22 for SSH
2024-02-05T18:43:33.701+0100 [DEBUG] Connecting to 10.0.1.1:22 for SSH
2024-02-05T18:43:33.705+0100 [DEBUG] Connection established. Handshaking for user root
2024-02-05T18:43:33.705+0100 [DEBUG] Connection established. Handshaking for user root
2024-02-05T18:43:33.784+0100 [DEBUG] Telling SSH config to forward to agent
2024-02-05T18:43:33.784+0100 [DEBUG] Setting up a session to request agent forwarding
2024-02-05T18:43:33.789+0100 [DEBUG] Telling SSH config to forward to agent
2024-02-05T18:43:33.789+0100 [DEBUG] Setting up a session to request agent forwarding
2024-02-05T18:43:33.833+0100 [INFO] agent forwarding enabled
2024-02-05T18:43:33.833+0100 [DEBUG] starting ssh KeepAlives
2024-02-05T18:43:33.833+0100 [DEBUG] opening new ssh session
2024-02-05T18:43:33.841+0100 [INFO] agent forwarding enabled
2024-02-05T18:43:33.841+0100 [DEBUG] starting ssh KeepAlives
2024-02-05T18:43:33.841+0100 [DEBUG] Starting remote scp process: 'scp' -vt /var/lib/vz/snippets
2024-02-05T18:43:33.841+0100 [DEBUG] opening new ssh session
2024-02-05T18:43:33.844+0100 [DEBUG] Started SCP session, beginning transfers...
2024-02-05T18:43:33.844+0100 [DEBUG] Beginning file upload...
2024-02-05T18:43:33.844+0100 [DEBUG] Starting remote scp process: 'scp' -vt /var/lib/vz/snippets
2024-02-05T18:43:33.848+0100 [DEBUG] SCP session complete, closing stdin pipe.
2024-02-05T18:43:33.848+0100 [DEBUG] Waiting for SSH session to complete.
2024-02-05T18:43:33.853+0100 [DEBUG] Started SCP session, beginning transfers...
2024-02-05T18:43:33.853+0100 [DEBUG] Beginning file upload...
2024-02-05T18:43:33.853+0100 [ERROR] scp stderr: "Sink: C0644 1899 user-data_vm-srv-app-1.yaml\nscp: debug1: fd 0 clearing O_NONBLOCK\r\n"
null_resource.cloud_init_config_files[0]: Creation complete after 0s [id=8398213386398904967]
2024-02-05T18:43:33.856+0100 [DEBUG] SCP session complete, closing stdin pipe.
2024-02-05T18:43:33.856+0100 [DEBUG] Waiting for SSH session to complete.
2024-02-05T18:43:33.858+0100 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2024-02-05T18:43:33.861+0100 [ERROR] scp stderr: "Sink: C0644 306 network-config_vm-srv-app-1.yaml\nscp: debug1: fd 0 clearing O_NONBLOCK\r\n"
null_resource.cloud_init_network-config_files[0]: Creation complete after 0s [id=4576244245995302674]
2024-02-05T18:43:33.866+0100 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2024-02-05T18:43:33.866+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-02-05T18:43:33.867+0100 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/null/3.2.2/darwin_arm64/terraform-provider-null_v3.2.2_x5 pid=36107
2024-02-05T18:43:33.867+0100 [DEBUG] provider: plugin exited
2024-02-05T18:43:33.934+0100 [WARN] Provider "registry.terraform.io/telmate/proxmox" produced an invalid plan for proxmox_vm_qemu.cloudinit["srv-app-1"], but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .oncreate: planned value cty.False for a non-computed attribute
- .force_create: planned value cty.False for a non-computed attribute
- .define_connection_info: planned value cty.True for a non-computed attribute
- .preprovision: planned value cty.True for a non-computed attribute
- .bios: planned value cty.StringVal("seabios") for a non-computed attribute
- .onboot: planned value cty.False for a non-computed attribute
- .tablet: planned value cty.True for a non-computed attribute
- .cpu: planned value cty.StringVal("host") for a non-computed attribute
- .vlan: planned value cty.NumberIntVal(-1) for a non-computed attribute
- .clone_wait: planned value cty.NumberIntVal(10) for a non-computed attribute
- .additional_wait: planned value cty.NumberIntVal(5) for a non-computed attribute
- .balloon: planned value cty.NumberIntVal(0) for a non-computed attribute
- .hotplug: planned value cty.StringVal("network,disk,usb") for a non-computed attribute
- .kvm: planned value cty.True for a non-computed attribute
- .vcpus: planned value cty.NumberIntVal(0) for a non-computed attribute
- .full_clone: planned value cty.True for a non-computed attribute
- .vm_state: planned value cty.StringVal("running") for a non-computed attribute
- .guest_agent_ready_timeout: planned value cty.NumberIntVal(100) for a non-computed attribute
- .automatic_reboot: planned value cty.True for a non-computed attribute
- .smbios: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead
- .disks[0].scsi[0].scsi0[0].disk[0].backup: planned value cty.True for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].mbps_r_burst: planned value cty.NumberIntVal(0) for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].format: planned value cty.StringVal("raw") for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].iops_r_burst: planned value cty.NumberIntVal(0) for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].mbps_r_concurrent: planned value cty.NumberIntVal(0) for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].mbps_wr_burst: planned value cty.NumberIntVal(0) for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].iops_wr_burst: planned value cty.NumberIntVal(0) for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].iops_r_burst_length: planned value cty.NumberIntVal(0) for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].mbps_wr_concurrent: planned value cty.NumberIntVal(0) for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].iops_wr_burst_length: planned value cty.NumberIntVal(0) for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].iops_wr_concurrent: planned value cty.NumberIntVal(0) for a non-computed attribute
- .disks[0].scsi[0].scsi0[0].disk[0].iops_r_concurrent: planned value cty.NumberIntVal(0) for a non-computed attribute
- .network[0].firewall: planned value cty.False for a non-computed attribute
- .network[0].link_down: planned value cty.False for a non-computed attribute
- .network[0].tag: planned value cty.NumberIntVal(-1) for a non-computed attribute
proxmox_vm_qemu.cloudinit["srv-app-1"]: Creating...
2024-02-05T18:43:33.934+0100 [INFO] Starting apply for proxmox_vm_qemu.cloudinit["srv-app-1"]
2024-02-05T18:43:33.939+0100 [DEBUG] proxmox_vm_qemu.cloudinit["srv-app-1"]: applying the planned Create change
2024-02-05T18:43:33.948+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:43:33 [DEBUG] setting computed for "smbios" from ComputedKeys: timestamp=2024-02-05T18:43:33.948+0100
2024-02-05T18:43:33.948+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:43:33 [DEBUG] setting computed for "unused_disk" from ComputedKeys: timestamp=2024-02-05T18:43:33.948+0100
2024-02-05T18:43:33.948+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:43:33 [DEBUG][QemuVmCreate] checking for duplicate name: srv-app-1: timestamp=2024-02-05T18:43:33.948+0100
2024-02-05T18:43:33.965+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:43:33 [DEBUG][QemuVmCreate] cloning VM: timestamp=2024-02-05T18:43:33.965+0100
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [10s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [20s elapsed]
2024-02-05T18:44:00.128+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:44:00 [DEBUG][QemuVmCreate] update VM after clone: timestamp=2024-02-05T18:44:00.127+0100
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [30s elapsed]
proxmox_vm_qemu.cloudinit["srv-app-1"]: Still creating... [40s elapsed]
2024-02-05T18:44:19.826+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:44:19 [DEBUG][QemuVmCreate] starting VM: timestamp=2024-02-05T18:44:19.825+0100
2024-02-05T18:44:21.892+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:44:21 [DEBUG][QemuVmCreate] vm creation done!: timestamp=2024-02-05T18:44:21.892+0100
2024-02-05T18:44:21.956+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:44:21 [DEBUG] VM status: running: timestamp=2024-02-05T18:44:21.955+0100
2024-02-05T18:44:21.956+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:44:21 [DEBUG] VM is running, checking the IP: timestamp=2024-02-05T18:44:21.956+0100
2024-02-05T18:44:21.956+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:44:21 [INFO][initConnInfo] trying to get vm ip address for provisioner: timestamp=2024-02-05T18:44:21.956+0100
2024-02-05T18:44:21.956+0100 [INFO] provider.terraform-provider-proxmox_v3.0.1-rc1: 2024/02/05 18:44:21 [DEBUG][initConnInfo] retrying for at most 20m0s minutes before giving up: timestamp=2024-02-05T18:44:21.956+0100
I don't know how to attach that cloud-init storage when creating the template. Some tutorials, like this one https://gist.github.com/chriswayg/b6421dcc69cb3b7e41f2998f1150e1df, import the disk from local:
qm importdisk 9110 debian-10.0.2-20190721-openstack-amd64.qcow2 local -format qcow2
which is likely impossible, because the local storage does not support VM images.
Some of you here don't even use cloudinit_cdrom_storage, and some of you are using it even though you're not supposed to when not using cicustom, according to the TF provider docs. I was using the official Proxmox docs to create the template https://pve.proxmox.com/wiki/Cloud-Init_Support and prepared Ansible scripts with the command module plus an additional conditional, so they can run repeatedly against already-created templates by skipping the task (command & shell don't provide idempotency on their own).
So first of all I run this script to prepare new templates. The code I shared earlier then deploys VMs based on those templates (the one I declare in the clone argument). Every time I run the damn script, everything goes well until it suddenly performs some weird operations on cloud-init and removes it, so I have no cloud-init in the VM.
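As a hypothetical sketch of that idempotency guard: the command module's creates argument skips the task once the template's config file exists (VMID 9000 and the path are assumptions, not taken from the scripts in the repo):

```yaml
- name: Create the cloud-init template VM (skipped if it already exists)
  ansible.builtin.command:
    cmd: qm create 9000 --memory 2048 --name mantic --net0 virtio,bridge=vmbr0
    # Skip the task when the VM config already exists on this node.
    creates: /etc/pve/qemu-server/9000.conf
```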
@electropolis Currently in 3.0.1-rc1, cloudinit_cdrom_storage along with at least one ci setting like ciuser must be set to add a cloud-init disk. Gonna update the docs.
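In other words, a minimal sketch of the two arguments that must appear together (the resource name and values are illustrative):

```hcl
resource "proxmox_vm_qemu" "example" {
  # ...
  # The cdrom storage plus at least one ci* setting are both
  # required for the provider to attach the cloud-init disk.
  cloudinit_cdrom_storage = "local-lvm"
  ciuser                  = "testuser"
}
```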
Awesome. The cloudinit resource docs also have some probably outdated attributes, like disk or volume. And regarding the problem with the missing cloud-init: should I refer to the mentioned issue and wait there for results?
@electropolis if you see any more issues/inconsistencies could you put them here? #932
By the way, you know that right now setting the cloud-init disk doesn't work at all?
I know this issue has been closed, but I'd like to share my experience. I encountered the same issue running 3.0.1-rc1, where creating multiple VMs caused Terraform to get stuck at "Still creating...". It turns out that qemu-guest-agent needs to be started in the guest VM. When the VM completed its provisioning, the Proxmox UI still said "Guest is not running". I SSHed into the VM and ran systemctl start qemu-guest-agent to force the agent to start, and Terraform immediately completed with "Creation complete after xxx". I then added a step to my cloud-init.yml to make sure that at the end of runcmd I always start the qemu agent. That seems to have solved my issue; I'm now able to provision Proxmox VMs without much trouble.
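A sketch of that runcmd ordering (assuming qemu-guest-agent is already installed in the image):

```yaml
#cloud-config
runcmd:
  # ...other provisioning steps first...
  # Start the agent last so the provider's agent checks succeed.
  - systemctl start qemu-guest-agent
```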
My config is below. I can't seem to get past "Still creating..." with agent set to either 1 or 0. I tried it with both the root user and the token user described in the install guide. I don't see anything happen in the UI or any creation requests being made, though I do see requests going out and getting 200 OK with:
I'm using the following: