hashibot opened 7 years ago
I'm having the same problem with v0.10.4
Does it work if you supply the replication rule id field? I believe AWS is auto-assigning one if you don't explicitly declare, which is why Terraform notes the drift. In our environment we specify it with an id in the Terraform configuration and do not see this behavior.
Thanks @bflad this solves it. Though, this behavior is different from that of other auto generated id fields.
We might be able to help here with better documentation, or possibly an under-the-hood change to the configuration schema making the id field a computed field, if that's possible and makes sense.
Personally I think we can improve the documentation here to explain this:
id - (Optional) Unique identifier for the rule.
Really means something along the lines of:
id - (Optional) Unique identifier for the rule. While it is optional, AWS will auto-assign an ID if one is not declared, and Terraform will detect this as drift on each subsequent plan.
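For illustration, declaring the id explicitly avoids the perpetual diff. A minimal sketch (resource names and the id value are placeholders):

```hcl
replication_configuration {
  role = aws_iam_role.replication.arn

  rules {
    # Declaring an id up front prevents AWS from auto-assigning one,
    # which would otherwise appear as drift on every plan.
    id     = "example-replication-rule"
    status = "Enabled"

    destination {
      bucket = aws_s3_bucket.destination.arn
    }
  }
}
```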
There is not currently a way (that I'm aware of, anyway) to use the generic Terraform resource lifecycle { ignore_changes = ["X"] } here, since it's a sub-configuration. So in essence, maybe the documentation should just say (Required) instead, to prevent any confusion, if making it a computed field isn't an option.
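For completeness, one coarse-grained option is ignoring the entire replication_configuration block rather than the nested id. A sketch using 0.12+ syntax (note this suppresses all replication drift, not just the id):

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"

  lifecycle {
    # Ignores drift on the whole nested block, since ignore_changes
    # cannot target rules[*].id individually.
    ignore_changes = [replication_configuration]
  }
}
```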
@bflad please make id a required parameter for replication rules; the provider's current behavior is needlessly confusing.
As I've been learning the codebase, we can actually keep this attribute optional, but set it on read so it doesn't show drift if it is automatically generated by AWS. I'm not sure if I'll have time to submit a PR for a few days though.
I was able to work around this by using the random_id resource:

```hcl
resource "random_id" "replication" {
  byte_length = 32
}

resource "aws_s3_bucket" "source" {
  provider      = "aws.src"
  bucket        = "${var.short_env}-${var.s3bucket_name}-${lower(random_id.s3.b64_url)}"
  acl           = "${var.acl}"
  force_destroy = "${var.force_destroy}"
  tags          = "${merge(var.common_tags, var.tags)}"

  versioning {
    enabled = "true"
  }

  replication_configuration {
    role = "${aws_iam_role.s3_replication.arn}"

    rules {
      id     = "${random_id.replication.b64_std}"
      prefix = "${var.replication_prefix}"
      status = "${var.replication_status}"

      destination {
        bucket        = "${aws_s3_bucket.destination.arn}"
        storage_class = "${var.dst_storage_class}"
      }
    }
  }
}
```
Still an issue in:
Terraform v0.11.10
+ provider.aws v1.58.0
Also still an issue in
Terraform v0.11.11
+ provider.aws v1.60.0
Has anyone addressed this bug yet? I am experiencing the same problem as described above with Terraform v0.11.11
provider "aws" (2.2.0)
Confirmed, same issue appears with v0.11.14
terraform version
Terraform v0.11.14
+ provider.aws v2.24.0
+ provider.local v1.3.0
+ provider.null v2.1.2
+ provider.template v2.1.2
Your version of Terraform is out of date! The latest version
is 0.12.6. You can update by downloading from www.terraform.io/downloads.html
This is still a problem in 0.12.7.
Terraform v0.12.20
Different issue, but similar result: when using filter, prefix should be required rather than optional. AWS doesn't care if filter = {}, but Terraform adds filter = { prefix = "" }:
```hcl
resource "aws_s3_bucket" "origin-bucket" {
  bucket = "715489234-origin-bucket"
  acl    = "private"

  versioning {
    enabled = true
  }

  replication_configuration {
    role = "${aws_iam_role.replication.arn}"

    rules {
      status = "Enabled"

      destination {
        bucket = "arn:aws:s3:::${aws_s3_bucket.replicated-bucket.id}"
      }

      filter {
        prefix = ""
      }
    }
  }
}
```
This is still an issue in 0.12.25. Would be very nice to get a fix for this!
Seems Amazon is also quite opinionated on priority. We could stop resources being recreated by setting:
```diff
replication_configuration = {
  role = "..."
  rules = [
    {
      id       = "..."
-     priority = 1
+     priority = 0
      status   = "Enabled"
      destination = {
        [...]
      }
      filter = {}
    }
  ]
}
```
Still happening in terraform v0.13.4 and terraform-aws-provider v3.10.0
I tried the priority change workaround but it didn't work.
Would simply changing the id to be a computed field in the schema be sufficient to fix this? Or am I missing some nuance there?
To confirm, we have been able to resolve this by setting both the id and priority fields to real values. The issue is that without specifying an id, a random string will be computed and then reported as a resource change.
```hcl
replication_configuration {
  role = aws_iam_role.replication.arn

  rules {
    id       = "foobar_replication"
    status   = "Enabled"
    priority = 0

    destination {
      bucket        = aws_s3_bucket.foobar.arn
      storage_class = "STANDARD"
    }
  }
}
```
Is there a way to add the priority to a lifecycle ignore_changes block?
Still an issue even when specifying both the id and priority fields.
I ran into this issue and worked around it by specifying filter {} and explicitly setting delete_marker_replication_status, in addition to id and priority.
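A sketch of that combination, using the pre-4.0 aws_s3_bucket schema (resource names and values here are illustrative assumptions):

```hcl
replication_configuration {
  role = aws_iam_role.replication.arn

  rules {
    id       = "replicate-all"
    priority = 0
    status   = "Enabled"

    # Declaring these explicitly (rather than letting AWS default them)
    # keeps subsequent plans empty.
    filter {}

    # Per the v3 provider docs the only valid value is "Enabled";
    # omit the attribute entirely to leave delete marker replication off.
    delete_marker_replication_status = "Enabled"

    destination {
      bucket        = aws_s3_bucket.destination.arn
      storage_class = "STANDARD"
    }
  }
}
```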
I am still having the same problem in 3.70.0 (first seen in 3.67.0). Even if all the fields are set, let's say I want to change the priority or change the status to "Disabled": the old rule gets marked as removed, a new rule is shown as added, and the plan includes additional lines for an extra empty rule {} section.
@tavin What happens when you try to disable the rule? Do you get a consistent plan? The only way I'm able to change the replication settings is to destroy and reapply the replication config...
Have the same issue, so I'm refactoring to see whether any of the input variables have wrong values assigned to them, as I've seen this issue before. aws_s3_bucket_replication_configuration seems to be the problem here, and I'm also using AWS provider 3.73.0.
Seeing the same thing here - have created null resources to point to an aws cli script to get around this, but if any other workarounds exist, please post them!
Writing this in hopes that it saves someone else trouble.
I am able to reproduce the issue with Terraform 1.1.5 and AWS provider 4.0.0.
It seems that unless you specify all of the fields called out earlier in this thread (id, priority, filter, and delete marker replication) in the rule block, it will detect drift and try to recreate the replication rule resource(s). Setting those seems to be sufficient to avoid attempting to recreate the replication rules (even in a dynamic "rule" block populated with consistent data between runs).
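As a sketch of the dynamic form (attribute names follow the 4.x aws_s3_bucket_replication_configuration schema; var.replication_rules and its object fields are hypothetical inputs):

```hcl
resource "aws_s3_bucket_replication_configuration" "example" {
  bucket = aws_s3_bucket.source.id
  role   = aws_iam_role.replication.arn

  dynamic "rule" {
    for_each = var.replication_rules # hypothetical list of rule objects

    content {
      id       = rule.value.id
      priority = rule.value.priority
      status   = "Enabled"

      # Spelling out filter and delete_marker_replication avoids
      # drift from server-side defaults.
      filter {}

      delete_marker_replication {
        status = "Disabled"
      }

      destination {
        bucket        = rule.value.destination_arn
        storage_class = "STANDARD"
      }
    }
  }
}
```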
Additionally, while not specifically relevant here, the documentation for previous versions of the AWS provider plugin (<4.0.0) included a note explaining the importance of setting the lifecycle policy on the aws_s3_bucket resource. Unfortunately, this note was removed as of 4.0.0; however, my tests indicate that it is still needed.
NOTE:
See the aws_s3_bucket_replication_configuration resource documentation to avoid conflicts. Replication configuration can only be defined in one resource, not both. When using the independent replication configuration resource, the following lifecycle rule is needed on the aws_s3_bucket resource:

```hcl
lifecycle {
  ignore_changes = [
    replication_configuration
  ]
}
```
This issue was originally opened by @PeteGoo as hashicorp/terraform#13352. It was migrated here as part of the provider split. The original body of the issue is below.
Terraform Version
0.8.8, 0.9.2
Affected Resource(s)
aws_s3_bucket
Terraform Configuration Files
Debug Output
Panic Output
Expected Behavior
A plan after the first apply should be empty
Actual Behavior
The plan after the first apply shows changes in the replication_configuration
Steps to Reproduce
terraform apply
terraform plan
Important Factoids
The id of the replication rule seems to be the only thing that changes in the plan. Perhaps it is being inconsistently used to calculate a hash for change detection?
References