hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

Lifecycle configuration for S3 Bucket failing with time out issue for AWS Provider 4.21.0 #25939

Open rishabhToshniwal opened 2 years ago

rishabhToshniwal commented 2 years ago

Error: error waiting for S3 Lifecycle Configuration for bucket (xx-xxx-xx-xx-bucket) to reach expected rules status after update: timeout while waiting for state to become 'READY' (last state: 'NOT_READY', timeout: 3m0s)

AWS Provider: 4.21.0

terraform apply fails with the above error, and rerunning it succeeds.

Bucket Configuration

resource "aws_s3_bucket" "exporter_url" {
  bucket_prefix        = "xx-"
  force_destroy = true
  tags = var.common_tags
}
resource "aws_s3_bucket_versioning" "exporter_url" {
  bucket = aws_s3_bucket.exporter_url.id
  versioning_configuration {
    status = "Enabled"
  }
}
resource "aws_s3_bucket_lifecycle_configuration" "exporter_url" {
  bucket = aws_s3_bucket.exporter_url.id
  rule {
    id     = "noncurrent-version-expiration-object-delete-marker"
    status = "Enabled"
    filter {
      prefix = "/"
    }
    noncurrent_version_expiration {
    
      noncurrent_days = var.s3_lifecycle_noncurrent_version_expiration
    }
    expiration {
      expired_object_delete_marker = true
    } 
  }
}
harshitp1987 commented 2 years ago

Facing the same issue after moving to the latest provider and defining aws_s3_bucket_lifecycle_configuration as a separate resource.

justinretzolk commented 2 years ago

Hey @rishabhToshniwal 👋 Thank you for taking the time to raise this! So that we have all of the necessary information in order to look into this, can you update the issue description to include all of the information requested in the bug report template?

rishabhToshniwal commented 2 years ago

Hi @justinretzolk, thanks for the reply. As requested, please find the relevant details below.




Terraform CLI and Terraform AWS Provider Version

Affected Resource(s)

Terraform Configuration Files


resource "aws_s3_bucket" "server-side-logging-bucket" { bucket = "${var.env_name}-server-side-logs" force_destroy = true tags = var.common_tags provisioner "local-exec" { when = destroy interpreter = ["python3", "-c"] command = <<EOT

!/usr/bin/env python3

import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('${self.id}')
bucket.object_versions.all().delete() EOT on_failure = continue } }

resource "aws_s3_bucket_acl" "server-side-logging-bucket" { bucket = aws_s3_bucket.server-side-logging-bucket.id acl = "log-delivery-write" }

resource "aws_s3_bucket_versioning" "server-side-logging-bucket" { bucket = aws_s3_bucket.server-side-logging-bucket.id versioning_configuration { status = "Enabled" } } resource "aws_s3_bucket_lifecycle_configuration" "server-side-logging-bucket" { bucket = aws_s3_bucket.server-side-logging-bucket.id rule { id = "noncurrent-version-expiration-object-delete-marker" status = "Enabled" filter { prefix = "/" } noncurrent_version_expiration {

days = "${var.s3_lifecycle_noncurrent_version_expiration}"

  noncurrent_days = "${var.s3_lifecycle_noncurrent_version_expiration}"
 }
 expiration {
   expired_object_delete_marker = true
 }  

} }

resource "aws_s3_bucket_server_side_encryption_configuration" "server-side-logging-bucket" { bucket = aws_s3_bucket.server-side-logging-bucket.id rule { apply_server_side_encryption_by_default { sse_algorithm = can(regex("^arn:aws:kms:",var.encrypt_at_rest_kms_key_id))?"aws:kms":"AES256" kms_master_key_id = can(regex("^arn:aws:kms:",var.encrypt_at_rest_kms_key_id))?var.encrypt_at_rest_kms_key_id:null } } }

Debug Output

Error: error waiting for S3 Lifecycle Configuration for bucket (test500-uat-server-side-logs) to reach expected rules status after update: timeout while waiting for state to become 'READY' (last state: 'NOT_READY', timeout: 3m0s)

[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module]

[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module] with aws_s3_bucket_lifecycle_configuration.server-side-logging-bucket,

[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module] on uploads3.tf line 30, in resource "aws_s3_bucket_lifecycle_configuration" "server-side-logging-bucket":

[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module] 30: resource "aws_s3_bucket_lifecycle_configuration" "server-side-logging-bucket" {

[2022-07-24T10:59:09.421Z] [Error][ms360-logging-tf-module]

[2022-07-24T10:59:09.972Z] [Ouptut][ms360-logging-tf-module] Releasing state lock. This may take a few moments..

Panic Output

Expected Behavior

terraform apply should not time out while applying the lifecycle configuration for the S3 bucket.

Actual Behavior

Error:

Error: error waiting for S3 Lifecycle Configuration for bucket (test500-uat-server-side-logs) to reach expected rules status after update: timeout while waiting for state to become 'READY' (last state: 'NOT_READY', timeout: 3m0s)

Steps to Reproduce

The issue occurs consistently for all the S3 buckets we create. Rerunning terraform apply resolves it.

  1. terraform apply

Important Factoids

AWS provider version: 4.21.0

Terraform version: 1.2.5

Terragrunt version: 0.38.5

EC2 Linux 2

References

rishabhToshniwal commented 2 years ago

@justinretzolk any update regarding the issue raised above?

michalip commented 1 year ago

@justinretzolk @rishabhToshniwal I faced the same issue today, using the lifecycle configuration from https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration

Sometimes it actually created all the resources, but after destroying and rerunning it timed out again.

Any updates?

justinretzolk commented 1 year ago

Hey all 👋 Thank you for checking in on this. Unfortunately, I'm not able to provide an estimate on when this will be looked into due to the potential of shifting priorities (we prioritize work by count of ":+1:" reactions, as well as a few other things). For more information on how we prioritize, check out our prioritization guide.

I am, however, going to update the tags here, in hopes that someone from the community will pick this up if the team isn't able to prioritize it based on the information above.

viatcheslavmogilevsky commented 1 year ago

Which provider versions definitely don't have this issue?

I faced the same "Lifecycle configuration for S3 Bucket failing with time out issue for AWS Provider" error on the following versions:

Update: this issue happens intermittently.

agloyd01 commented 1 year ago

I wonder how many people facing this issue are accidentally creating duplicate lifecycle resources like this: https://stackoverflow.com/questions/75675077/terraform-timeout-error-when-trying-to-create-multiple-lifecycle-rules-on-an-s3

I was getting this error while testing a custom module, and it was not immediately apparent that I was accidentally creating and attaching a second aws_s3_bucket_lifecycle_configuration resource to the same bucket. Only one lifecycle configuration per bucket is supported, so the two resources conflict and cause the API to throw this error.
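
For illustration, a minimal hypothetical sketch of that conflicting setup (the bucket and resource names here are made up, not taken from anyone's actual module): two aws_s3_bucket_lifecycle_configuration resources end up pointed at the same bucket, and since S3 stores only one lifecycle configuration per bucket, each apply overwrites the other and the provider's post-update check can fail.

# Hypothetical illustration only: two lifecycle configurations attached to the same bucket.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket"
}

# One lifecycle configuration, e.g. created inside a reusable module.
resource "aws_s3_bucket_lifecycle_configuration" "from_module" {
  bucket = aws_s3_bucket.logs.id
  rule {
    id     = "expire-noncurrent-versions"
    status = "Enabled"
    filter {}
    noncurrent_version_expiration {
      noncurrent_days = 30
    }
  }
}

# A second configuration for the same bucket, e.g. added at the root while the
# module already manages one. The two resources keep overwriting each other.
resource "aws_s3_bucket_lifecycle_configuration" "from_root" {
  bucket = aws_s3_bucket.logs.id
  rule {
    id     = "abort-incomplete-uploads"
    status = "Enabled"
    filter {}
    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}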

dylanlan commented 10 months ago

We seem to be running into this error intermittently, using AWS provider version 5.25.0 and Terraform version 1.4.6. We recently added an aws_s3_bucket_lifecycle_configuration resource with for_each, so that it applies a single configuration to each separate bucket. When we hit the error, a retry seems to fix it.

I wonder if it's sometimes just taking 4+ minutes rather than the default 3-minute timeout. If so, a configurable timeout would be nice.
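
For context, a rough sketch of the for_each pattern described above, with exactly one lifecycle configuration per bucket; the variable, bucket names, and the 30-day value are placeholders rather than anything from the original module.

# Hypothetical sketch only: one lifecycle configuration per bucket via for_each.
variable "bucket_names" {
  type    = set(string)
  default = ["example-bucket-a", "example-bucket-b"]
}

resource "aws_s3_bucket" "this" {
  for_each = var.bucket_names
  bucket   = each.value
}

# for_each over the bucket resources so every bucket gets its own configuration.
resource "aws_s3_bucket_lifecycle_configuration" "this" {
  for_each = aws_s3_bucket.this
  bucket   = each.value.id

  rule {
    id     = "expire-noncurrent-versions"
    status = "Enabled"
    filter {}
    noncurrent_version_expiration {
      noncurrent_days = 30
    }
  }
}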

richardj-bsquare commented 9 months ago

I have this problem as well, and I already use an especially long timeout because of other resources. All I can say is that I don't believe the lifecycle configuration change ever completes once it gets into this state. Retrying didn't work for me either; I had to delete the bucket to make progress.