tysen closed this issue 1 year ago
This seems to also happen when using `count`, which I daresay is even more unexpected (and I'm not sure if it's related):
Here's the hcl:
resource "aws_ebs_volume" "master-logs" {
  availability_zone = element(var.availability_zones, count.index)
  count             = var.master_nodes
  size              = var.logs_size
  type              = "gp2"

  tags = {
    # removed because irrelevant
  }
}
and the output:
❯ terraform destroy -target='aws_ebs_volume.master-logs[0]'
aws_ebs_volume.master-logs[0]: Refreshing state... [id=vol-0180cde1f47bce502]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
  # aws_ebs_volume.master-logs[0] will be destroyed
  - resource "aws_ebs_volume" "master-logs" {
      - arn               = "arn:aws:ec2:us-east-1:896521799855:volume/vol-0180cde1f47bce502" -> null
      - availability_zone = "us-east-1a" -> null
      - encrypted         = false -> null
      - id                = "vol-0180cde1f47bce502" -> null
      - iops              = 120 -> null
      - size              = 40 -> null
      - tags              = {} -> null
      - type              = "gp2" -> null
    }

  # aws_volume_attachment.master-logs[0] will be destroyed
  - resource "aws_volume_attachment" "master-logs" {
      - device_name = "/dev/xvdi" -> null
      - id          = "vai-1953403477" -> null
      - instance_id = "i-0db625ed53e684f74" -> null
      - volume_id   = "vol-0180cde1f47bce502" -> null
    }

  # aws_volume_attachment.master-logs[1] will be destroyed
  - resource "aws_volume_attachment" "master-logs" {
      - device_name = "/dev/xvdi" -> null
      - id          = "vai-4050681587" -> null
      - instance_id = "i-03295fb3616f81a23" -> null
      - volume_id   = "vol-046bedf2c64d5b3ce" -> null
    }

  # aws_volume_attachment.master-logs[2] will be destroyed
  - resource "aws_volume_attachment" "master-logs" {
      - device_name = "/dev/xvdi" -> null
      - id          = "vai-2673450254" -> null
      - instance_id = "i-07c719ddea9b10ce2" -> null
      - volume_id   = "vol-0df9cd28b13a7cddf" -> null
    }
Plan: 0 to add, 0 to change, 4 to destroy.
What I'm trying to do here is troubleshoot the build process for one node of a three node cluster (without killing the other two nodes). My current workaround is to manually delete the resources (in the AWS console) and then delete the state objects, but being able to use the destroy command would be a lot more convenient and intuitive.
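For reference, the state-cleanup half of that workaround can be done from the CLI with `terraform state rm`; this is a sketch using the resource addresses from my config above (run it only after deleting the actual resources in the AWS console):

```shell
# After deleting the volume and its attachment in the AWS console,
# drop the corresponding objects from Terraform state so the next
# plan doesn't try to manage them. Addresses match the config above.
terraform state rm 'aws_volume_attachment.master-logs[0]'
terraform state rm 'aws_ebs_volume.master-logs[0]'
```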
How can this still be an issue 3 years later?
❯ terraform -version
Terraform v1.3.3
on linux_amd64
+ provider registry.terraform.io/hashicorp/time v0.9.0
+ provider registry.terraform.io/ovh/ovh v0.22.0
+ provider registry.terraform.io/terraform-provider-openstack/openstack v1.49.0
vms.tf (simplified)
vms = {
  "0" = { name = "vm0" }
  "1" = { name = "vm1" }
  "2" = { name = "vm2" }
}
## VM ##
resource "openstack_compute_instance_v2" "vm_instance" {
  for_each    = var.vms
  provider    = openstack.ovh
  name        = "vm_${each.value.name}"
  image_name  = "debian"
  flavor_name = "s1-2"
}
## VOLUMES ##
resource "openstack_blockstorage_volume_v3" "vm_volume" {
  for_each    = var.vms
  provider    = openstack.ovh
  name        = "vm_${each.value.name}_volume"
  size        = 10
  volume_type = "classic"
}
## ATTACH VOLUMES ##
resource "openstack_compute_volume_attach_v2" "vm_attach" {
  for_each    = var.vms
  provider    = openstack.ovh
  instance_id = openstack_compute_instance_v2.vm_instance[each.key].id
  volume_id   = openstack_blockstorage_volume_v3.vm_volume[each.key].id
}
❯ terraform apply -destroy -target='openstack_compute_instance_v2.vm_instance["2"]'
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
  # openstack_compute_instance_v2.vm_instance["2"] will be destroyed
  - resource "openstack_compute_instance_v2" "vm_instance" {
      [...]
      - id   = "wxc" -> null
      [...]
      - name = "vm_vm2" -> null
    }

  # openstack_compute_volume_attach_v2.vm_attach["0"] will be destroyed
  - resource "openstack_compute_volume_attach_v2" "vm_attach" {
      [...]
    }

  # openstack_compute_volume_attach_v2.vm_attach["1"] will be destroyed
  - resource "openstack_compute_volume_attach_v2" "vm_attach" {
      [...]
    }

  # openstack_compute_volume_attach_v2.vm_attach["2"] will be destroyed
  - resource "openstack_compute_volume_attach_v2" "vm_attach" {
      - device      = "/dev/sdb" -> null
      - id          = "wxc/*********" -> null
      - instance_id = "wxc" -> null
      - region      = "****" -> null
      - volume_id   = "********" -> null
    }
Plan: 0 to add, 0 to change, 4 to destroy.
╷
│ Warning: Resource targeting is in effect
│
│ You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration.
│
│ The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.
╵
So this action would destroy the attachments of all my VMs => production crash.
Expected behavior: destroy only the attachments belonging to the targeted instance.
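For comparison, the scope I expected is roughly what you get by targeting the single attachment explicitly (a sketch, using the same resource addresses as above):

```shell
# Expected blast radius: only the one attachment for key "2",
# not every element of vm_attach.
terraform apply -destroy \
  -target='openstack_compute_volume_attach_v2.vm_attach["2"]'
```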
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform plans to destroy an instance not affected by the config change when using `for_each`. I think this may be related to https://github.com/hashicorp/terraform/issues/22549, in which case the answer may again be "working as designed", but I'm filing this anyway in case I'm wrong, because this particular case does not include the usage of `-target`, and to document additional user friction with this behavior. See the issue filed against our provider: https://github.com/terraform-providers/terraform-provider-google/issues/4286
Terraform Version
Terraform Configuration Files
foo.tf
bar.tfvars
Expected Behavior
A change to one disk should affect only that disk and the dependent instance.
Actual Behavior
One disk and both instances are planned for destruction.
Steps to Reproduce
1. terraform apply
2. Change `centos-7` to `centos-6`
3. terraform plan
Additional Context
References
https://github.com/hashicorp/terraform/issues/22549
https://github.com/terraform-providers/terraform-provider-google/issues/4286