Closed mdittbrenner215 closed 5 years ago
I also tried just creating a disk separately and deleting the disk, and that worked fine. Looks like it's an issue when you create a VM with a secondary drive and try to delete all objects.
```
vra_machine.vms[0]: Destroying... [id=6b69d80434f5eccf]
vra_machine.vms[0]: Still destroying... [id=6b69d80434f5eccf, 10s elapsed]
vra_machine.vms[0]: Destruction complete after 10s
vra_block_device.vms: Destroying... [id=7f925b8589d8167559624899b07c8]

Error: unknown error (status 204): {resp:0xc00011a1b0}
```
It's deleting the VM, which includes the disk, and then tries to delete the block device, but the block device no longer exists because it was already deleted along with the VM.
Also, how can we keep the block device when deleting the VM?
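One possibility (an untested sketch, assuming the provider's `persistent` argument on `vra_block_device` behaves as its documentation describes) is to mark the disk as persistent so it survives the machine delete:

```hcl
# Assumption: persistent = true tells vRA the block device should
# survive a delete action on the machine it is attached to.
resource "vra_block_device" "vms" {
  capacity_in_gb = 500
  name           = "terraform_vra_block_device"
  project_id     = data.vra_project.this.id
  persistent     = true
}
```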
Wanted to update based on what I'm seeing. I do see the same issue with the 204 error. I think this is because the swagger spec doesn't define this status code, so the corresponding SDK code doesn't get generated. This can likely be patched in the SDK while we wait for the correct changes in the swagger doc, which should clear up the state issue.
Also, I am seeing a different path for a stand-alone disk (it produces a request tracker and awaits the destruction). This seems to imply the disk is getting deleted when the machine is deleted first. I need to dig into this issue separately, and it likely needs to be raised with the vRA team rather than here.
@markpeek not sure if I should add to this issue or open another one.
When I remove the disk block from the VM and do a terraform apply, Terraform reports one change, but I'm not seeing the disk actually removed from the VM.
@mdittbrenner215 this was auto-closed via a commit that fixes the 204 issue. It's likely best to move any additional issues into new issues after verifying with this new release.
Thanks, that fixes the issue! Appreciate it.
If I delete a machine with a block device attached, it keeps throwing my tfstate file off.
```
vra_machine.vms[0]: Destroying... [id=733e9e938e3d66cf]
vra_machine.vms[0]: Still destroying... [id=733e9e938e3d66cf, 10s elapsed]
vra_machine.vms[0]: Still destroying... [id=733e9e938e3d66cf, 20s elapsed]
vra_machine.vms[0]: Destruction complete after 20s
vra_block_device.vms: Destroying... [id=7f925b8589d8167559621948975c9]

Error: unknown error (status 204): {resp:0xc0000e42d0}
```
```
HQSML-1712491:bin mdittb638$ terraform destroy
data.vra_project.this: Refreshing state...
data.vra_cloud_account_vsphere.this: Refreshing state...
data.vra_network.this: Refreshing state...
vra_block_device.vms: Refreshing state... [id=7f925b8589d8167559621948975c9]

Error: unknown error (status 404): {resp:0xc0001babd0}

HQSML-1712491:bin mdittb638$ terraform destroy
data.vra_project.this: Refreshing state...
data.vra_cloud_account_vsphere.this: Refreshing state...
data.vra_network.this: Refreshing state...
vra_block_device.vms: Refreshing state... [id=7f925b8589d8167559621948975c9]

Error: unknown error (status 404): {resp:0xc0000f8480}

HQSML-1712491:bin mdittb638$ terraform apply
data.vra_project.this: Refreshing state...
data.vra_cloud_account_vsphere.this: Refreshing state...
data.vra_network.this: Refreshing state...
vra_block_device.vms: Refreshing state... [id=7f925b8589d8167559621948975c9]

Error: unknown error (status 404): {resp:0xc0000dc3f0}
```
The only way to get Terraform working again is to get rid of the tfstate file and copy it back from the backup file.
Terraform config is below:
```hcl
resource "vra_block_device" "vms" {
  capacity_in_gb = 500
  name           = "terraform_vra_block_device"
  project_id     = data.vra_project.this.id
}

resource "vra_machine" "vms" {
  count       = var.instance_count
  description = "terraform test machine"
  project_id  = data.vra_project.this.id
  image       = "Cloudbase-init"
  flavor      = "c5.large"
  name        = "tf-q1-${count.index}"

  nics {
    network_id = data.vra_network.this.id
  }

  disks {
    block_device_id = vra_block_device.vms.id
    name            = "disk2"
  }

  depends_on = [vra_block_device.vms]
}
```