exoscale / terraform-provider-exoscale

Terraform Exoscale provider
https://www.terraform.io/docs/providers/exoscale/
Mozilla Public License 2.0

[Bug]: Terraform loses sight of compute instance if you create it too small for block storage #375

Open jamielinux opened 3 months ago

jamielinux commented 3 months ago

Current Behavior

Let's say your config looks something like this:

resource "exoscale_block_storage_volume" "disk-01" {
  zone = local.zone
  name = "disk-01"
  size = 10
  lifecycle {
    prevent_destroy = true
  }
}

data "exoscale_template" "my_template" {
  zone = "ch-gva-2"
  name = "Linux Ubuntu 22.04 LTS 64-bit"
}

resource "exoscale_compute_instance" "my_instance" {
  zone                     = "ch-gva-2"
  name                     = "my-instance"
  template_id              = data.exoscale_template.my_template.id
  type                     = "standard.tiny"
  disk_size                = 10
  block_storage_volume_ids = [exoscale_block_storage_volume.disk-01.id]
}

When I run tofu apply I get:

Error: unable to parse attached instance ID: AttachBlockStorageVolumeToInstance: http response: invalid request: Request restricted: Instance size must be at least small

Running tofu state list shows the block storage volume but not the compute instance. However, the compute instance actually exists and is running. Terraform cannot destroy it, because Terraform doesn't know it exists (unless you manually import it, but I didn't try that).
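
I didn't try the import route, but recovering the orphaned instance would presumably look something like the block below (OpenTofu 1.7 supports import blocks). The <instance-id> placeholder is the instance UUID shown in the Exoscale console, and the id@zone format is my reading of the provider docs, so double-check it:

import {
  # hypothetical recovery sketch, not tested
  to = exoscale_compute_instance.my_instance
  id = "<instance-id>@ch-gva-2"
}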

Expected Behavior

If the VM is created, Terraform should have it in its state. Alternatively, I guess VM creation could be halted and the VM destroyed as part of the failure. In either case, Terraform state should match what actually happened.
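
In the meantime, the attach error itself seems avoidable by sizing the instance to at least standard.small, which (going by the error message) appears to be the minimum the API accepts for block storage attachment:

resource "exoscale_compute_instance" "my_instance" {
  # ... same as the config above ...
  type = "standard.small" # at least "small" when block storage volumes are attached
  # ...
}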

Thank you!

Steps To Reproduce

No response

Provider Version

0.59.1

Terraform Version

OpenTofu v1.7.3

Relevant log output

No response

kobajagi commented 3 months ago

Thanks for the report. I can reproduce the issue; we will look into improving error handling.

sauterp commented 3 months ago

We plan to migrate this resource to the new Terraform plugin framework and fix this at the same time. It's not a priority at the moment, but it is planned.