hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

[Bug]: aws_s3_bucket data sources cannot be destroyed if the bucket no longer exists #39673

Open josephmidura opened 4 days ago

josephmidura commented 4 days ago

Terraform Core Version

1.5.3

AWS Provider Version

4.67.0

Affected Resource(s)

data.aws_s3_bucket

Expected Behavior

A warning is displayed and terraform destroy continues, removing the data source from state and destroying the remaining resources as normal.

Actual Behavior

The aws_s3_bucket data source cannot be removed during terraform destroy if the associated bucket has already been deleted: the provider fails to read the bucket and the destroy aborts.

Relevant Error/Panic Output Snippet

No response

Terraform Configuration Files

data "aws_s3_bucket" "glue" {
  bucket = "${local.project_stage}-glue" 
}

Steps to Reproduce

terraform destroy was run on a project. The destroy operation stopped with an error because deletion protection was enabled on an RDS database. After deletion protection was disabled in the console for the RDS database, the terraform destroy command was run again.

The aws_s3_bucket data source now references a bucket that no longer exists (it was deleted during the initial terraform destroy), and the destroy operation failed with the following error:

Error: Failed getting S3 bucket (bucket-name): NotFound: Not Found

terraform state list does not show the bucket resource.

Debug Output

No response

Panic Output

No response

Important Factoids

terraform installed via asdf

References

No response

Would you like to implement a fix?

None


justinretzolk commented 3 days ago

Hey @josephmidura 👋 Thank you for taking the time to raise this! Can you give me an idea of what local.project_stage looks like? I'm also curious -- are you creating a bucket using an aws_s3_bucket resource and then reading it using the corresponding data source in the same configuration?

Unfortunately, I don't believe there's a way to skip reading a data source during a destroy (Terraform assumes you need the most up-to-date information possible), so this might be a bit tricky to get around.
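One way to sidestep the read entirely, if the bucket is managed by an aws_s3_bucket resource in the same configuration, is to drop the data source and reference the resource's attributes directly; managed resources are tracked in state and don't require a fresh API read at destroy time. A minimal sketch, assuming a hypothetical resource named glue (the reporter's actual resource block isn't shown in the thread):

resource "aws_s3_bucket" "glue" {
  bucket = "${local.project_stage}-glue"
}

# Reference the managed resource's attributes (here, the ARN) instead of
# re-reading the bucket through a data.aws_s3_bucket lookup.
output "glue_bucket_arn" {
  value = aws_s3_bucket.glue.arn
}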

josephmidura commented 3 days ago

Thanks for the reply @justinretzolk. Here is some additional code that helps explain local.project_stage:

variable "project" {
  description = "Project name"
  default     = "name"
}

variable "stage" {
  type        = string
  description = "Stage"
  default     = "prod"
}

locals {
  project_stage = "${var.project}-${var.stage}"
  region        = data.aws_region.current.name
  account_id    = data.aws_caller_identity.current.account_id
}

So the code snippet I included above could be rewritten as the following:

data "aws_s3_bucket" "glue" {
  bucket = "${var.project}-${var.stage}-glue" 
}

or

data "aws_s3_bucket" "glue" {
  bucket = "name-prod-glue" 
}

Yes, I created the name-prod-glue bucket using an aws_s3_bucket resource and then read it with the aws_s3_bucket data source in the same configuration.

Workaround that was successful in my case

Today, I recreated the name-prod-glue bucket manually, ran terraform destroy again, and this time all resources were destroyed cleanly, including data.aws_s3_bucket. I'm glad everything was removed, but this workaround is not a robust solution. I'd appreciate any insight you can offer.
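For anyone who can't recreate the missing bucket, another possible workaround (untested here, and only applicable if the stale data source entry actually remains in state; the terraform state list output above suggests it may not) is to remove the entry by address and re-run the destroy:

terraform state rm data.aws_s3_bucket.glue
terraform destroy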