Closed: mattwillsher closed this issue 7 months ago.
I've attempted to reproduce the problem using the latest version 0.1.1, following the steps you've described, but everything seems to be functioning as expected on my end. Here's the output I received:
$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
+ create
Terraform will perform the following actions:
  # incus_storage_bucket.this will be created
  + resource "incus_storage_bucket" "this" {
      + config   = {
          + "size" = "100MiB"
        }
      + location = (known after apply)
      + name     = "bucket"
      + pool     = "default"
      + target   = (known after apply)
    }
Plan: 1 to add, 0 to change, 0 to destroy.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if
you run "terraform apply" now.
$ terraform apply
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
+ create
Terraform will perform the following actions:
  # incus_storage_bucket.this will be created
  + resource "incus_storage_bucket" "this" {
      + config   = {
          + "size" = "100MiB"
        }
      + location = (known after apply)
      + name     = "bucket"
      + pool     = "default"
      + target   = (known after apply)
    }
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
incus_storage_bucket.this: Creating...
incus_storage_bucket.this: Creation complete after 1s [name=bucket]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
This leads me to believe that the issue might be related to a specific configuration or environment setup on your side. To better understand and assist you, could you please share some additional details about your setup, such as the server and client environments, the Incus and provider versions, and the storage pool configuration?
Curious.
Server: Incus installed from the Zabbly APT repo. The storage pool uses the lvm driver with thin pool disabled.
config:
  lvm.use_thinpool: "false"
  lvm.vg_name: vg_incus
  source: vg_incus
  volatile.initial_source: vg_incus
description: ""
name: default
driver: lvm
> incus version
Client version: 0.6
Server version: 0.6
> cat /etc/debian_version
12.5
Client: running under WSL2 on Windows 11, with Incus installed from Homebrew.
❯ incus version
Client version: 0.6
Server version: 0.6
❯ terraform version
Terraform v1.7.5
on linux_amd64
+ provider registry.terraform.io/hashicorp/local v2.5.1
+ provider registry.terraform.io/lxc/incus v0.1.1
❯ cat /etc/redhat-release
AlmaLinux release 9.0 (Emerald Puma)
Thanks to your configuration details, I was able to replicate the issue. It appears to occur when creating a storage bucket on a storage pool that uses the lvm driver.
@stgraber @adamcstephens, to address this behavior, we might need to check whether lvm is selected as the storage driver and then prompt the user to include block.filesystem and block.mount_options in their configuration block. Do you think this check should be incorporated directly into the provider's logic, or would it be more appropriate to outline this requirement in the provider's documentation?
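For illustration only, here is a rough sketch of what such a driver-aware check could look like; this is not the provider's actual code, the function and variable names are hypothetical, and the required keys are simply the ones from the workaround discussed in this thread:

```go
package main

import "fmt"

// requiredBucketKeys maps a storage pool driver to config keys that a storage
// bucket on such a pool appears to need (hypothetical list, based on the lvm
// behaviour discussed in this issue).
var requiredBucketKeys = map[string][]string{
	"lvm": {"block.filesystem", "block.mount_options"},
}

// missingBucketConfig returns the keys required for the given pool driver
// that are absent from the user-supplied bucket config.
func missingBucketConfig(driver string, config map[string]string) []string {
	var missing []string
	for _, key := range requiredBucketKeys[driver] {
		if _, ok := config[key]; !ok {
			missing = append(missing, key)
		}
	}
	return missing
}

func main() {
	cfg := map[string]string{"size": "100MiB"}
	if missing := missingBucketConfig("lvm", cfg); len(missing) > 0 {
		fmt.Printf("storage bucket config is missing keys for the lvm driver: %v\n", missing)
	}
}
```

Whether a check along these lines belongs in the provider's plan-time validation or only in its documentation is exactly the open question above.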
@mattwillsher, as a temporary solution, please adjust your storage bucket configuration as follows:
resource "incus_storage_bucket" "this" {
name = "bucket"
pool = "default"
config = {
"block.filesystem" = "ext4"
"block.mount_options" = "discard"
"size" = "100MiB"
}
}
This configuration should circumvent the issue for now. I'm eager to hear your thoughts and further suggestions from @stgraber and @adamcstephens on the proposed fix.
Looking forward to your input.
Are we not able to merge this remote state into the stored config attribute? My preference would be that this just gets computed and stored instead of requiring users to have it in place. The error in the first post indicates to me this is something that needs to be fixed anyway, since we shouldn't be generating an inconsistent state.
Oh, definitely, that's a good point. I will try to make the necessary adjustments to implement this effectively.
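As a rough illustration of that direction (a sketch only, not the provider's actual resource code), the read/create logic could overlay the config reported by the Incus server onto the user-declared config before storing it in state, so server-computed keys such as block.filesystem on lvm pools end up recorded instead of producing an inconsistent result:

```go
package main

import "fmt"

// mergeRemoteConfig overlays server-reported config onto the user-declared
// config, keeping any keys the server computed on its own. Names and
// behaviour here are illustrative, not the provider's real implementation.
func mergeRemoteConfig(declared, remote map[string]string) map[string]string {
	merged := make(map[string]string, len(remote))
	for k, v := range declared {
		merged[k] = v
	}
	for k, v := range remote {
		if _, ok := merged[k]; !ok {
			merged[k] = v // keep server-computed keys in state
		}
	}
	return merged
}

func main() {
	declared := map[string]string{"size": "100MiB"}
	remote := map[string]string{
		"size":                "100MiB",
		"block.filesystem":    "ext4",
		"block.mount_options": "discard",
	}
	fmt.Println(mergeRemoteConfig(declared, remote))
}
```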