rishabhToshniwal opened 2 years ago
Facing the same issue after moving to the latest provider and defining `aws_s3_bucket_lifecycle_configuration` as a separate resource.
Hey @rishabhToshniwal 👋 Thank you for taking the time to raise this! So that we have all of the necessary information in order to look into this, can you update the issue description to include all of the information requested in the bug report template?
> name: 🐛 Bug Report
> about: If something isn't working as expected 🤔.

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
```hcl
resource "aws_s3_bucket" "server-side-logging-bucket" {
  bucket        = "${var.env_name}-server-side-logs"
  force_destroy = true
  tags          = var.common_tags

  provisioner "local-exec" {
    when        = destroy
    interpreter = ["python3", "-c"]
    command     = <<EOT
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('${self.id}')
bucket.object_versions.all().delete()
EOT
    on_failure = continue
  }
}

resource "aws_s3_bucket_acl" "server-side-logging-bucket" {
  bucket = aws_s3_bucket.server-side-logging-bucket.id
  acl    = "log-delivery-write"
}

resource "aws_s3_bucket_versioning" "server-side-logging-bucket" {
  bucket = aws_s3_bucket.server-side-logging-bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "server-side-logging-bucket" {
  bucket = aws_s3_bucket.server-side-logging-bucket.id

  rule {
    id     = "noncurrent-version-expiration-object-delete-marker"
    status = "Enabled"

    filter {
      prefix = "/"
    }

    noncurrent_version_expiration {
      noncurrent_days = var.s3_lifecycle_noncurrent_version_expiration
    }

    expiration {
      expired_object_delete_marker = true
    }
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "server-side-logging-bucket" {
  bucket = aws_s3_bucket.server-side-logging-bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = can(regex("^arn:aws:kms:", var.encrypt_at_rest_kms_key_id)) ? "aws:kms" : "AES256"
      kms_master_key_id = can(regex("^arn:aws:kms:", var.encrypt_at_rest_kms_key_id)) ? var.encrypt_at_rest_kms_key_id : null
    }
  }
}
```
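One thing worth noting: the provider documentation's versioned-bucket example attaches the lifecycle configuration only after versioning is enabled, via `depends_on`, whereas the configuration above creates the two resources in parallel with no guaranteed ordering. A minimal sketch of that documented pattern (resource names here are illustrative, not from the configuration above):

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "example-server-side-logs"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  # Wait for versioning to be enabled before applying rules that
  # reference noncurrent object versions.
  depends_on = [aws_s3_bucket_versioning.example]
  bucket     = aws_s3_bucket.example.id

  rule {
    id     = "noncurrent-version-expiration"
    status = "Enabled"
    noncurrent_version_expiration {
      noncurrent_days = 30
    }
  }
}
```

Whether the missing `depends_on` is related to the timeout here is not confirmed, but it is a difference from the documented example.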
```
Error: error waiting for S3 Lifecycle Configuration for bucket (test500-uat-server-side-logs) to reach expected rules status after update: timeout while waiting for state to become 'READY' (last state: 'NOT_READY', timeout: 3m0s)

[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module]
[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module]   with aws_s3_bucket_lifecycle_configuration.server-side-logging-bucket,
[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module]   on uploads3.tf line 30, in resource "aws_s3_bucket_lifecycle_configuration" "server-side-logging-bucket":
[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module]   30: resource "aws_s3_bucket_lifecycle_configuration" "server-side-logging-bucket" {
[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module]
[2022-07-24T10:59:09.972Z] [Output][ms360-logging-tf-module] Releasing state lock. This may take a few moments...
```
It should not time out while applying the lifecycle configuration for the S3 bucket.
Error:

```
Error: error waiting for S3 Lifecycle Configuration for bucket (test500-uat-server-side-logs) to reach expected rules status after update: timeout while waiting for state to become 'READY' (last state: 'NOT_READY', timeout: 3m0s)
```
The issue occurs every time, for all the S3 buckets we create. On rerunning, it resolves itself.
```
terraform apply
```

- AWS provider version: 4.21.0
- Terraform version: 1.2.5
- Terragrunt version: 0.38.5
- Environment: EC2 Linux 2
@justinretzolk any update regarding the issue raised above?
@justinretzolk @rishabhToshniwal I faced the same issue today, using the lifecycle configuration found here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration

Sometimes it actually creates all the resources, but after destroying and rerunning, it times out. Any updates?
Hey all 👋 Thank you for checking in on this. Unfortunately, I'm not able to provide an estimate of when this will be looked into, due to the potential for shifting priorities (we prioritize work by the count of ":+1:" reactions, as well as a few other things). For more information on how we prioritize, check out our prioritization guide.
I am, however, going to update the tags here, in hopes that someone from the community will pick this up if the team isn't able to prioritize it based on the information above.
Which provider versions definitely don't have this issue?
I faced the same "Lifecycle configuration for S3 bucket failing with timeout" issue with the AWS provider
at the following versions:
Update: this issue happens intermittently
I wonder how many people facing this issue are accidentally creating duplicate lifecycle resources, as in this question: https://stackoverflow.com/questions/75675077/terraform-timeout-error-when-trying-to-create-multiple-lifecycle-rules-on-an-s3

I was getting this error while testing something with a custom module, in a way where it was not immediately apparent that I was accidentally creating and attaching a second `aws_s3_bucket_lifecycle_configuration` resource. Only one is supported per bucket, so they conflict and cause the API to throw this error.
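To illustrate the foot-gun described above, a hedged sketch (resource and bucket names are illustrative): S3 stores a single lifecycle configuration per bucket, so two Terraform resources targeting the same bucket keep overwriting each other's rules, and neither resource ever sees the rule set it expects.

```hcl
# Anti-pattern: two lifecycle configuration resources for ONE bucket.
# Each apply overwrites the other's rules on the S3 side.
resource "aws_s3_bucket_lifecycle_configuration" "expire_logs" {
  bucket = aws_s3_bucket.example.id
  rule {
    id     = "expire-logs"
    status = "Enabled"
    expiration {
      days = 90
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "abort_uploads" {
  bucket = aws_s3_bucket.example.id # same bucket -- conflicts with the above
  rule {
    id     = "abort-multipart"
    status = "Enabled"
    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}
```

The fix is to merge all rules into a single `aws_s3_bucket_lifecycle_configuration` resource with multiple `rule` blocks.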
We seem to be running into this error intermittently, using AWS provider version 5.25.0 and Terraform version 1.4.6. We've recently added an `aws_s3_bucket_lifecycle_configuration` resource with a `for_each`, such that it adds a single configuration for each separate bucket. When we have the error, a retry seems to fix it.
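A sketch of the pattern described above, one lifecycle configuration per bucket via `for_each` (the `bucket_names` variable and resource names are hypothetical, not from the original report):

```hcl
variable "bucket_names" {
  type = set(string)
}

resource "aws_s3_bucket" "buckets" {
  for_each = var.bucket_names
  bucket   = each.value
}

resource "aws_s3_bucket_lifecycle_configuration" "buckets" {
  # One lifecycle configuration per bucket -- never two for the same bucket.
  for_each = aws_s3_bucket.buckets
  bucket   = each.value.id

  rule {
    id     = "expire-noncurrent"
    status = "Enabled"
    noncurrent_version_expiration {
      noncurrent_days = 30
    }
  }
}
```

Keying the lifecycle resource off the bucket resource's own `for_each` map keeps the two in one-to-one correspondence, so the duplicate-configuration conflict described earlier in this thread can't arise.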
I wonder if it's sometimes just taking 4+ minutes instead of the default 3-minute timeout. If so, a configurable timeout would be nice.
I have this problem and I have an especially long timeout due to other resources. All I can say is that I don't believe that the lifecycle configuration change will ever complete when it gets into this state. Also retrying didn't work for me, I've had to delete the bucket to make progress.
```
Error: error waiting for S3 Lifecycle Configuration for bucket (xx-xxx-xx-xx-bucket) to reach expected rules status after update: timeout while waiting for state to become 'READY' (last state: 'NOT_READY', timeout: 3m0s)
```

AWS Provider: 4.21.0

The `terraform apply` fails with the above error, and it resolves on rerun.
Bucket Configuration