ghost opened this issue 5 years ago
Same problem.
Hey @vmorkunas 👋 Thank you for taking the time to file this issue! Given that there's been a number of AWS provider releases since you initially filed it, can you confirm whether you're still experiencing this behavior?
Hi all,
I have the same problem in my stack. Following @justinretzolk's suggestion, I tested the most recent providers, but the error persists up to the latest release as of this message (5.17.0). I noticed that Terraform can add disks without churn only when the new entries keep the list in alphabetical/numeric order, i.e. when they are appended at the end, because appending does not shift the indexes of the existing elements. So, in @vmorkunas's case, I can add new disks as long as they respect that ordering. The problem is when changing existing infrastructure: if you add or remove a disk in the middle of an already-provisioned list, the list indexes shift and Terraform detaches and re-attaches every disk from that point on (see the sketch after the snippet below).
@justinretzolk, can you help?
Code snippet
module "ec2_instance" {
for_each = { for k, v in var.aws_ec2_instance : k => v }
source = "../../../../../../modules/ec2_instance"
ami = each.value.ami
name = each.value.name
timeouts = each.value.instance_timeout
instance_type = each.value.instance_type
subnet_id = each.value.subnet_id
key_name = each.value.key_name
disable_api_stop = each.value.disable_api_stop
disable_api_termination = each.value.disable_api_termination
metadata_options = each.value.metadata_options
iam_instance_profile = each.value.iam_instance_profile
hibernation = each.value.hibernation
availability_zone = each.value.availability_zone
root_block_device = each.value.root_block_device
associate_public_ip_address = each.value.associate_public_ip_address
vpc_security_group_ids = each.value.vpc_security_group_ids
tags = each.value.tags_instance
}
locals {
  ebs_attachments = flatten([
    for k, v in var.aws_ec2_instance : [
      for idx in v.ebs_block_device_attachment : {
        device_name = idx.device_name
        volume_id   = idx.volume_id
        instance_id = module.ec2_instance[k].instance_id
      }
    ]
  ])
}

resource "aws_volume_attachment" "this" {
  for_each     = { for k, v in local.ebs_attachments : k => v }
  device_name  = each.value.device_name
  volume_id    = each.value.volume_id
  instance_id  = each.value.instance_id
  force_detach = true
}
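As far as I can tell, the forced replacements come from the for_each keys: local.ebs_attachments is a list, so k in { for k, v in local.ebs_attachments : k => v } is the numeric list index, and inserting or removing one element shifts every index after it. Below is a minimal sketch of a workaround, assuming device names are unique per instance; the composite "<instance key>:<device name>" key is my own choice for illustration, not something from the module.

locals {
  # Key each attachment by "<instance key>:<device name>" instead of a list
  # index, so adding or removing one entry does not shift the others.
  # coalesce() guards against the optional attribute being null.
  ebs_attachments = merge([
    for k, v in var.aws_ec2_instance : {
      for att in coalesce(v.ebs_block_device_attachment, []) :
      "${k}:${att.device_name}" => {
        device_name = att.device_name
        volume_id   = att.volume_id
        instance_id = module.ec2_instance[k].instance_id
      }
    }
  ]...)
}

resource "aws_volume_attachment" "this" {
  for_each     = local.ebs_attachments
  device_name  = each.value.device_name
  volume_id    = each.value.volume_id
  instance_id  = each.value.instance_id
  force_detach = true
}

With keys like "instance-00:/dev/sdf", adding a new disk should plan exactly one new aws_volume_attachment and leave the existing instances untouched, regardless of where the entry sits in the list.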
variable "aws_ec2_instance" {
  description = "Define data structure to provision ec2"
  type = map(
    object({
      disable_api_termination     = bool
      disable_api_stop            = bool
      hibernation                 = bool
      associate_public_ip_address = bool
      instance_timeout            = optional(map(string))
      tags_instance               = map(string)
      vpc_security_group_ids      = list(string)
      name                        = string
      instance_type               = string
      iam_instance_profile        = optional(string)
      ami                         = string
      subnet_id                   = string
      metadata_options            = optional(map(string))
      key_name                    = optional(string)
      availability_zone           = string
      ebs_block_device_attachment = optional(list(object({
        device_name = string
        volume_id   = string
      })))
      root_block_device = list(object({
        volume_size = number
        volume_type = string
        throughput  = number
        encrypted   = bool
      }))
    })
  )
  default = {}
}
aws_ec2_instance = {
  instance-00 = {
    tags_instance = {
      "xxxx" = "xxxxxxxx"
      "xxxx" = "xxxxxxxx"
      "xxxx" = "xxxxxxxx"
      "xxxx" = "xxxxxxxx"
      "xxxx" = "xxxxxxxx"
    }
    name                        = "xxxxxxxx"
    ami                         = "ami-xxxxxxxx"
    instance_type               = "m5a.large"
    availability_zone           = "xxxxxxxx"
    iam_instance_profile        = "xxxxxxxx"
    subnet_id                   = "subnet-xxxxxxxx"
    vpc_security_group_ids      = ["sg-xxxxxxxx"]
    associate_public_ip_address = false
    disable_api_stop            = false
    disable_api_termination     = true
    hibernation                 = true
    metadata_options = {
      "http_endpoint"               = "enabled"
      "http_tokens"                 = "required"
      "http_put_response_hop_limit" = 1
    }
    instance_timeout = {
      "create" = "5m"
      "update" = "5m"
      "delete" = "5m"
    }
    root_block_device = [
      {
        encrypted   = true
        volume_type = "gp3"
        throughput  = 200
        volume_size = 100
      }
    ]
    ebs_block_device_attachment = [
      {
        device_name = "/dev/sdf"
        volume_id   = "vol-xxxxxxxx"
      },
      {
        device_name = "/dev/sdg"
        volume_id   = "vol-xxxxxxxx"
      },
    ]
  },
  instance-01 = {
    ...
  },
  ...
}
Plan output
...
  # aws_volume_attachment.this["10"] must be replaced
-/+ resource "aws_volume_attachment" "this" {
      ~ id          = "vai-xxxxxxxxxxxxxxx" -> (known after apply)
      ~ instance_id = "i-xxxxxxxxxxxxxxxxx" -> "i-xxxxxxxxxxxxxxxxx" # forces replacement
      ~ volume_id   = "vol-xxxxxxxxxxxxxxx" -> "vol-xxxxxxxxxxxxxxx" # forces replacement
        # (2 unchanged attributes hidden)
    }

  # aws_volume_attachment.this["11"] must be replaced
-/+ resource "aws_volume_attachment" "this" {
      ~ id          = "vai-xxxxxxxxxxxxxxx" -> (known after apply)
      ~ instance_id = "i-xxxxxxxxxxxxxxxxx" -> "i-xxxxxxxxxxxxxxxxx" # forces replacement
      ~ volume_id   = "vol-xxxxxxxxxxxxxxx" -> "vol-xxxxxxxxxxxxxxx" # forces replacement
        # (2 unchanged attributes hidden)
    }

  # aws_volume_attachment.this["12"] will be created
  + resource "aws_volume_attachment" "this" {
      + device_name  = "/dev/sdx"
      + force_detach = true
      + id           = (known after apply)
      + instance_id  = "i-xxxxxxxxxxxxxxxxx"
      + volume_id    = "vol-xxxxxxxxxxxxxxx"
    }
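A note on the plan above: the map keys ("10", "11", "12") are positions in the flattened list, which is why inserting one disk forces replacement of every entry after it. If you migrate to stable keys like the earlier sketch, the existing attachments can be re-addressed in state instead of being replaced. A hedged example; the old index "10" and new key "instance-00:/dev/sdf" here are illustrative placeholders, not values taken from this plan:

# Tell Terraform the existing state entry now lives at the new key,
# so no detach/attach happens on the next apply.
moved {
  from = aws_volume_attachment.this["10"]
  to   = aws_volume_attachment.this["instance-00:/dev/sdf"]
}

terraform state mv 'aws_volume_attachment.this["10"]' 'aws_volume_attachment.this["instance-00:/dev/sdf"]' should accomplish the same thing from the CLI, one attachment at a time.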
Terraform Version: v1.4.0
Expected Behavior: Attach and detach new EBS disks on EC2 instances without recreating the existing attachments.
Actual Behavior: The disk is added/removed, but all attachments created in the same module are recreated.
Providers used: 4.67.0 through 5.17.0
This issue was originally opened by @vmorkunas as hashicorp/terraform#22975. It was migrated here as a result of the provider split. The original body of the issue is below.
Hello,
I have code that works fine in all cases except when a new EBS disk is added to the list. On the second apply, it destroys all of the EBS attachments that were created in the same module and creates them again.
Plan output
Terraform Version
0.12.9
Expected Behavior
The new EBS disk is added and attached to the EC2 instance.
Actual Behavior
The disk is added, but all attachments created in the same module are recreated.
Steps to Reproduce