[Closed] hagaram closed this 1 week ago
When using the resource directly it's fine, but when trying to pass it from a variable into a module and construct the disks block there, the new structure is horrible to work with. It would be nice to cover this in the documentation. Yes, it's a little out of scope for a provider, but I believe it would really help some people, including me.
Hi, today I tried to write a Terraform module for the new version of the provider, but unfortunately I could not handle all the disk types through dynamic blocks, since the new layout does not work with dynamic variables. Tell me, is there any way to move this into variables? As it stands, this cannot be used to create Terraform modules.
You can take a look at the vm_qemu.md file in that merge request you linked (#892). The diff shows the new ways to configure disks against the old. Is that the kind of thing you're looking for?
@clincha Unfortunately, that's not the solution I'm looking for, and I think @Korney95 isn't either.
When you use the resource directly, as shown in the example you posted, it works as intended, without issues. But if you want to wrap the provider in a module, for whatever reason (I personally create DNS entries and entries in NetBox along with the VM), it is absolutely impossible to pass or otherwise construct the disks block from some kind of variable.
disks is a block, not an argument you could assign a variable to from the module instance. That's what makes this hard to do. What I'm trying to do is something like this (non-working example, one of many):
Define a dynamic disks block:
dynamic "disks" {
for_each = var.vm_disk
iterator = item
content {
dynamic "ide" {
for_each = item.value.ide[*]
content {
disk {
# ... argument assignments from ide.value
}
}
}
dynamic "sata" {
for_each = item.value.sata[*]
content {
disk {
# ... argument assignments from sata.value
}
}
}
dynamic "scsi" {
for_each = item.value.scsi[*]
content {
disk {
# ... argument assignments from scsi.value
}
}
}
dynamic "virtio" {
for_each = item.value.virtio[*]
content {
disk {
# ... argument assignments from virtio.value
}
}
}
}
}
Define the variable:
variable "vm_disk" {
  type = map(object({
    ide = optional(object({
      # (whatever attributes are needed for ide disks)
    }))
    sata = optional(object({
      # (whatever attributes are needed for sata disks)
    }))
    scsi = optional(object({
      # (whatever attributes are needed for scsi disks)
    }))
    virtio = optional(object({
      # (whatever attributes are needed for virtio disks)
    }))
  }))
}
And then use the variable in the module instance:
vm_disk = {
  # some object
}
This would then be "parsed" into the correct disks block. There is currently no other way to pass it to the resource in the module, and it's impossible for me to do it correctly. I've spent quite a lot of hours trying to make it work somehow.
@hagaram did something happen? Unfortunately, I don't have working examples either. I've tried a lot of things.
Same problem.
Unfortunately, I didn't manage to create a module with this new nested disks block design either.
In older versions of the provider there was a nice way to use a dynamic disk block, because disk could be specified multiple times and you could pass multiple disks in a variable as a map like this:
variables.tf:
variable "disks" {
type = map(any)
default = {}
}
main.tf:
resource "proxmox_vm_qemu" "proxmox_vm" {
....
dynamic "disk" {
for_each = var.disks
content {
size = disk.value.size
type = disk.value.type
storage = disk.value.storage
backup = disk.value.backup
cache = disk.value.cache
}
}
}
module "test-vm" {
source = "git::https://....."
disks = {
"disk1" = {
size = "20G",
type = "virtio",
storage = "datastore_name",
backup = 1,
cache = "directsync",
}
"disk2" = {
size = "30G"
type = "virtio"
storage = "vdatastore_name"
backup = 1
cache = "directsync"
}
}
}
I have the same problem. I tried to upgrade my code to use the new disks block, but my disks are specified dynamically through a config variable. Having to specify disk1, disk2, etc. won't work. I'll have to use the old version of this package until this is fixed.
I thought #794 had some insight on how to do this in the comments.
@Tinyblargon Could you point me to where exactly? From my understanding, the overhaul of the disks block doesn't support dynamic disks based on your comment here: https://github.com/Telmate/terraform-provider-proxmox/pull/794#issuecomment-1843051202
@comminutus it doesn't support dynamic blocks in the traditional sense, but this https://github.com/Telmate/terraform-provider-proxmox/pull/794#issuecomment-1598269870 explains how you could do it when you use it in a module.
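In short, the trick described there is to drive each fixed slot block with a zero-or-one for_each. A minimal sketch of the idea (the variable and attribute names here are illustrative, not the provider's full schema):
# Inside the disks > scsi nesting of the resource.
dynamic "scsi0" {
  # var.scsi0_disk is an optional object; the [*] splat turns null into
  # an empty list, so this block is emitted zero or one times.
  for_each = var.scsi0_disk[*]
  content {
    disk {
      size    = scsi0.value.size
      storage = scsi0.value.storage
    }
  }
}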
Thanks for everyone's efforts to progress the Proxmox provider. I'm keen to use the newer v3 release. I've been using the provider to deploy Flatcar Linux VMs with Butane/Ignition configurations.
I can see the provider can be used directly in a Terraform script, but I (and others) seem to be using the provider from a Terraform module. I'm thinking the design has been driven by the underlying QEMU/Proxmox Go API rather than by a combination of this and typical uses of the provider. It would be great if there were a test case/example for this (I was trying to get an example of my module deploying and have become stuck).
IMHO the new disk configuration is too complex. Looking at the code base, there is a large and deep schema definition for the disks parameter. That said, the design is highly orthogonal. I tried to get ChatGPT to write a module variable definition given the schema, but it all got too big and repetitive.
I know there are design constraints with Proxmox (and QEMU), but it would be great if disks were a separate resource from the VM. Provisioning disks shouldn't be so tightly coupled to compute resources. I would really like disks to have a lifecycle independent of the VM, so I can mark them as 'prevent_destroy'.
I'm stuck providing an interface to my module to allow the usage of the Proxmox provider. How are people getting this working today? Has anyone written a huge definition for a disks variable?
it doesn't support dynamic blocks in the traditional sense, but this https://github.com/Telmate/terraform-provider-proxmox/pull/794#issuecomment-1598269870 explains how you could do it when you use it in a module.
It's technically possible, but you have to code every single possible block with a for_each and a filter that narrows it down to just that one specific value. So in reality it can't really be made generic without a massive amount of code duplication in the module. I'm not sure what the driving force was to structure it this way, but this use case was certainly not accounted for.
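For illustration, each slot then ends up as its own near-identical block, roughly like this sketch (the variable name and attributes here are made up for the example, not from the provider docs):
# Inside the disks > scsi nesting; repeated per slot with only the index changed.
dynamic "scsi1" {
  # keep only the list entry intended for this specific slot
  for_each = { for i, d in var.scsi_disks : i => d if i == 1 }
  content {
    disk {
      size    = scsi1.value.size
      storage = scsi1.value.storage
    }
  }
}
Multiply that by every slot and every controller type the module might expose, and the duplication becomes obvious.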
A friendlier way would have been to flatten it out to an extent: for example, instead of disks > scsi > scsi[0-N] blocks, have just one disks block with a flat list of configurations where the type and index are attributes.
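As a purely hypothetical sketch of that flattened shape (this is not the provider's actual schema, just an illustration of the idea):
# Hypothetical flattened schema, for illustration only.
dynamic "disks" {
  for_each = var.vm_disks
  content {
    type    = disks.value.type # "ide", "sata", "scsi" or "virtio"
    slot    = disks.value.slot # index within that type
    size    = disks.value.size
    storage = disks.value.storage
  }
}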
I'm stuck providing an interface to my module to allow the usage of the Proxmox provider. How are people getting this working today? Has anyone written a huge definition for a disks variable?
@gbrackley I just hacked it by copy-pasting the configuration to the extent that I'm using it. It's ugly but it works... Let's hope this change gets reverted.
Hi. I also ran into this problem when I upgraded my home lab to version 8 and, with it, the Terraform provider. I spent several hours experimenting with dynamic blocks, but couldn't come up with a better solution than adding an if condition to each block. It forces code repetition and doesn't look elegant; I'm almost ashamed to show such a thing =). But it works. I will show only an example with 4 disks.
disks {
  virtio {
    virtio0 {
      disk {
        storage = var.datastore_name[var.main_disk.datastore]
        size    = "${var.main_disk_size}G"
      }
    }
    dynamic "virtio1" {
      for_each = { for i, disk in var.disks : i => disk if i == 0 }
      content {
        disk {
          storage = var.datastore_name[virtio1.value.datastore]
          size    = "${virtio1.value.size_gb}G"
        }
      }
    }
    dynamic "virtio2" {
      for_each = { for i, disk in var.disks : i => disk if i == 1 }
      content {
        disk {
          storage = var.datastore_name[virtio2.value.datastore]
          size    = "${virtio2.value.size_gb}G"
        }
      }
    }
    dynamic "virtio3" {
      for_each = { for i, disk in var.disks : i => disk if i == 2 }
      content {
        disk {
          storage = var.datastore_name[virtio3.value.datastore]
          size    = "${virtio3.value.size_gb}G"
        }
      }
    }
    dynamic "virtio4" {
      for_each = { for i, disk in var.disks : i => disk if i == 3 }
      content {
        disk {
          storage = var.datastore_name[virtio4.value.datastore]
          size    = "${virtio4.value.size_gb}G"
        }
      }
    }
  }
}
That is, in the module you need to describe all 15 potentially used disks, or as many as you plan to use in the VM.
yuck, I'd much rather stick with the older version with much simpler code.
I tried to create a few VMs and wanted to have vars like:
vm_configs = {
  group1 = {
    instances = 2
    cores     = 1
    ...
    disks = {
      disk1 = {...}
      disk2 = {...}
    }
  }
  group2 = {
    instances = 4
    cores     = 2
    ...
    disks = {
      disk1 = {...}
      disk2 = {...}
    }
  }
}
and that's not possible with the new structure because of disks.
I have spent 4 days on this problem and haven't arrived at a reasonable solution. I can't create disks the way I did before without writing a large piece of code, which kills flexibility when using the module. Maybe the disk handling should return to the way it was before? The current implementation is very inconvenient for the large number of people who write modules for this provider. Also, since the release with these changes, I have not found a single updated module on the net.
@Tinyblargon As I understand it from the pull requests, you were the initiator of this change to disk handling, and judging by the description in your bio, you may be able to suggest the right solution to this problem, which would help a lot of people. https://github.com/Telmate/terraform-provider-proxmox/issues/969
Any update, guys?
@sach3000 a base implementation has been done but it's waiting on more feedback as we don't want to have a breaking change again in the future. #986
Ok. Thanks. Can you show an example of how to use disk descriptions through a module?
My two cents, I do not agree with this statement:
Maybe the disk handling should return to the way it was before?
Please don't do this! The latest disks block design is more flexible, dynamic, and comprehensive than the previous one. It can handle disks, CDs, cloud-init drives, etc., makes it possible to switch between the ide/sata/scsi interfaces, and all of that within one block!
And as proof of concept, below is my example of how dynamic disk blocks can be handled with the latest version of the provider.
Implemented and tested on:
Terraform code examples with snippets:
variables.tf
...
variable "proxmox_vm_qemu" {
type = object({
disks = optional(object({
scsi = optional(object({
# disk0 (system disk)
scsi0 = optional(object({
disk = optional(object({
backup = optional(bool, true)
cache = optional(string, "")
emulatessd = optional(bool, false)
format = optional(string, "raw")
iothread = optional(bool, true)
replicate = optional(bool, true)
size = optional(string, "10G")
storage = optional(string, "local-lvm")
}), {})
}), {})
# disk1 (optional)
scsi1 = optional(object({
disk = optional(object({
backup = optional(bool)
cache = optional(string)
emulatessd = optional(bool)
format = optional(string)
iothread = optional(bool)
replicate = optional(bool)
size = optional(string)
storage = optional(string)
}), {})
}), {})
# disk2 (optional)
scsi2 = optional(object({
disk = optional(object({
backup = optional(bool)
cache = optional(string)
emulatessd = optional(bool)
format = optional(string)
iothread = optional(bool)
replicate = optional(bool)
size = optional(string)
storage = optional(string)
}), {})
}), {})
# init-cloud drive (optional)
scsi10 = optional(object({
cloudinit = optional(object({
storage = optional(string)
}), {})
}), {})
}), {})
}), {})
# efi disk (optional)
efidisk = optional(object({
efitype = optional(string)
storage = optional(string)
}), {})
})
}
...
terraform.tfvars
...
proxmox_vm_qemu = {
  disks = {
    scsi = {
      # disk0 (system disk)
      scsi0 = {
        disk = {
          backup     = null
          cache      = null
          emulatessd = null
          format     = null
          iothread   = null
          replicate  = null
          size       = null
          storage    = null
        }
      }
      # disk1 (optional)
      scsi1 = {
        disk = {
          backup     = null
          cache      = null
          emulatessd = null
          format     = null
          iothread   = null
          replicate  = null
          size       = null
          storage    = null
        }
      }
      # disk2 (optional)
      scsi2 = {
        disk = {
          backup     = null
          cache      = null
          emulatessd = null
          format     = null
          iothread   = null
          replicate  = null
          size       = null
          storage    = null
        }
      }
      # init-cloud drive (optional)
      scsi10 = {
        cloudinit = {
          storage = null
        }
      }
    }
  }
  # efi disk (optional)
  efidisk = {
    efitype = null
    storage = null
  }
}
...
main.tf
...
resource "proxmox_vm_qemu" "cloud_vm_from_packer_template" {
...
disks {
scsi {
# disk0 (system disk)
dynamic "scsi0" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.size != null ? [0] : []
content {
dynamic "disk" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.size != null ? [0] : []
content {
backup = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.backup
cache = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.cache
emulatessd = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.emulatessd
format = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.format
iothread = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.iothread
replicate = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.replicate
size = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.size
storage = var.proxmox_vm_qemu.disks.scsi.scsi0.disk.storage
}
}
}
}
# disk1 (optional)
dynamic "scsi1" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.size != null ? [0] : []
content {
dynamic "disk" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.size != null ? [0] : []
content {
backup = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.backup
cache = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.cache
emulatessd = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.emulatessd
format = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.format
iothread = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.iothread
replicate = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.replicate
size = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.size
storage = var.proxmox_vm_qemu.disks.scsi.scsi1.disk.storage
}
}
}
}
# disk2 (optional)
dynamic "scsi2" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.size != null ? [0] : []
content {
dynamic "disk" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.size != null ? [0] : []
content {
backup = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.backup
cache = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.cache
emulatessd = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.emulatessd
format = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.format
iothread = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.iothread
replicate = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.replicate
size = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.size
storage = var.proxmox_vm_qemu.disks.scsi.scsi2.disk.storage
}
}
}
}
# init-cloud drive (optional)
dynamic "scsi10" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit.storage != null ? [0] : []
content {
dynamic "cloudinit" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit.storage != null ? [0] : []
content {
storage = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit.storage
}
}
}
}
}
}
# efi disk (optional)
dynamic "efidisk" {
for_each = var.proxmox_vm_qemu.efidisk.storage != null ? [0] : []
content {
efitype = var.proxmox_vm_qemu.efidisk.efitype
storage = var.proxmox_vm_qemu.efidisk.storage
}
}
...
}
...
This example is for:
So, if you want to create two additional data disks, you only have to modify terraform.tfvars, for example:
...
proxmox_vm_qemu = {
  disks = {
    scsi = {
      ...
      # disk1 (optional)
      scsi1 = {
        disk = {
          backup     = true
          cache      = ""
          emulatessd = false
          format     = "raw"
          iothread   = true
          replicate  = true
          size       = "1G"
          storage    = "local-lvm"
        }
      }
      # disk2 (optional)
      scsi2 = {
        disk = {
          backup     = true
          cache      = ""
          emulatessd = false
          format     = "raw"
          iothread   = true
          replicate  = true
          size       = "1G"
          storage    = "local-lvm"
        }
      }
      ...
    }
  }
}
...
For adding a cloud-init drive and an EFI disk:
...
proxmox_vm_qemu = {
  disks = {
    scsi = {
      ...
      # init-cloud drive (optional)
      scsi10 = {
        cloudinit = {
          storage = "local-lvm"
        }
      }
    }
  }
  # efi disk (optional)
  efidisk = {
    efitype = "4m"
    storage = "local-lvm"
  }
}
...
To delete/detach these optional disks, set the parameter values back to null.
Hope these examples help you achieve your goals.
Hey @maksimsamt,
I ended up building a module using dynamics and conditionals; it works indeed, but it adds (I believe) a lot of duplication in the module.
Instead of defining one disk variable we have to define all of them (scsi0, scsi1, virtio1, ...).
However, even if this isn't the place, I wanted to say that I found the new version works really well and is pretty fast. Good job.
@maksimsamt, would you mind sharing your Packer configuration? (I'm stuck in my deployment and I suspect that I'm missing something in Packer.)
Unfortunately, I don't think this is the right place to share information about the Packer code/configuration. It's a completely different topic and requires a separate discussion, probably in the HashiCorp Packer discussion hub, not here.
A slightly different approach than https://github.com/Telmate/terraform-provider-proxmox/issues/907#issuecomment-2138076259, to cover the dynamic functionality of for_each:
variables.tf
...
variable "proxmox_vm_qemu" {
type = object({
# disks (required)
disks = object({
# scsi disks (required)
scsi = object({
# disk0 (required)
scsi0 = object({
disk = list(object({
backup = bool
cache = string
emulatessd = bool
format = string
iothread = bool
replicate = bool
size = string
storage = string
}))
})
# disk1 (optional)
scsi1 = optional(object({
disk = optional(list(object({
backup = optional(bool)
cache = optional(string)
emulatessd = optional(bool)
format = optional(string)
iothread = optional(bool)
replicate = optional(bool)
size = optional(string)
storage = optional(string)
})), [])
}), {})
# disk2 (optional)
scsi2 = optional(object({
disk = optional(list(object({
backup = optional(bool)
cache = optional(string)
emulatessd = optional(bool)
format = optional(string)
iothread = optional(bool)
replicate = optional(bool)
size = optional(string)
storage = optional(string)
})), [])
# init-cloud drive (optional)
scsi10 = optional(object({
cloudinit = optional(list(object({
storage = optional(string)
})), [])
}), {})
}), {})
}), {})
})
}
...
terraform.tfvars
...
proxmox_vm_qemu = {
  # disks (required)
  disks = {
    # scsi (disks: 1xrequired, 2xoptional; cloudinit drive: 1xoptional)
    scsi = {
      # disk0 (required)
      scsi0 = {
        disk = [{
          backup     = true
          cache      = ""
          emulatessd = false
          format     = "raw"
          iothread   = true
          replicate  = true
          size       = "20G"
          storage    = "local-lvm"
        }]
      }
      # disk1 (optional)
      scsi1 = {
        disk = []
      }
      # disk2 (optional)
      scsi2 = {
        disk = []
      }
      # init-cloud drive (optional)
      scsi10 = {
        cloudinit = []
      }
    }
  }
...
main.tf
...
resource "proxmox_vm_qemu" "cloud_vm_from_packer_template" {
...
# disks (required)
disks {
# scsi disks (required)
scsi {
# disk0 (required)
dynamic "scsi0" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi0.disk[*]
content {
dynamic "disk" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi0.disk[*]
iterator = disk_scsi0
content {
backup = disk_scsi0.value.backup
cache = disk_scsi0.value.cache
emulatessd = disk_scsi0.value.emulatessd
format = disk_scsi0.value.format
iothread = disk_scsi0.value.iothread
replicate = disk_scsi0.value.replicate
size = disk_scsi0.value.size
storage = disk_scsi0.value.storage
}
}
}
}
# disk1 (optional)
dynamic "scsi1" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi1.disk[*]
content {
dynamic "disk" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi1.disk[*]
iterator = disk_scsi1
content {
backup = disk_scsi1.value.backup
cache = disk_scsi1.value.cache
emulatessd = disk_scsi1.value.emulatessd
format = disk_scsi1.value.format
iothread = disk_scsi1.value.iothread
replicate = disk_scsi1.value.replicate
size = disk_scsi1.value.size
storage = disk_scsi1.value.storage
}
}
}
}
# disk2 (optional)
dynamic "scsi2" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi2.disk[*]
content {
dynamic "disk" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi2.disk[*]
iterator = disk_scsi2
content {
backup = disk_scsi2.value.backup
cache = disk_scsi2.value.cache
emulatessd = disk_scsi2.value.emulatessd
format = disk_scsi2.value.format
iothread = disk_scsi2.value.iothread
replicate = disk_scsi2.value.replicate
size = disk_scsi2.value.size
storage = disk_scsi2.value.storage
}
}
}
}
# init-cloud drive (optional)
dynamic "scsi10" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit[*]
content {
dynamic "cloudinit" {
for_each = var.proxmox_vm_qemu.disks.scsi.scsi10.cloudinit[*]
content {
storage = cloudinit.value.storage
}
}
}
}
}
}
...
}
...
up
@arsensimonyanpicsart
Hi @Tinyblargon and @maksimsamt,
I've been using the Proxmox Terraform provider for a while (I have several github projects using it), and I appreciate the work that goes into maintaining it. However, the recent changes to how disks are defined have significantly complicated my workflow.
Previously, I could define disks in a much simpler and more concise way:
disk {
  type    = "scsi"
  size    = "18G"
  storage = "vg_nvme"
  ssd     = 1
}
Or when using dynamic blocks:
dynamic "disk" {
for_each = var.disks
content {
type = disk.value.type
size = disk.value.size
storage = disk.value.storage
ssd = disk.value.ssd
}
}
Now, the new structure requires something like this:
disks {
  scsi {
    scsi0 {
      disk {
        size       = 18
        storage    = "vg_nvme"
        emulatessd = true
      }
    }
  }
}
When using dynamic, as shown in https://github.com/Telmate/terraform-provider-proxmox/issues/907#issuecomment-2162765751, the code becomes even longer and more cumbersome, which not only increases complexity but also introduces unnecessary verbosity.
These changes feel like a step backward in terms of usability. The new approach is more verbose, less intuitive, and makes managing multiple disks a real challenge. It almost feels like the provider isn't being tested in real-world scenarios where simplicity and maintainability are key.
I'd like to request a reconsideration of this change or, at the very least, some discussion around making the provider more dynamic-friendly again. The previous method of defining disks was much more straightforward and aligned better with Terraform's philosophy of simplicity and readability.
Thank you for your hard work, and I hope we can find a middle ground that satisfies both usability and the new features.
Best regards,
Luciano Sampaio
@lsampaioweb #986 addresses this and how we are planning to change it. A lot of the code for it is already done. Hopefully, I can get it ready before the end of the month.
This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide the full configuration and debug logs.
This issue was closed because it has been inactive for 5 days since being marked as stale.
Hi, I have a Terraform Proxmox module based on this provider, and I'm having trouble modifying the module's VM resource definition to work correctly with the new disks layout --> https://github.com/Telmate/terraform-provider-proxmox/pull/892
How should I correctly define the disks block in a variable so that I can pass it verbatim, or at least provide a structure from which the correct disks block can be built in the resource definition inside the module?