hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

Unable to set aws_s3_bucket_lifecycle_configuration with prefix and object_size_greater_than filters #38551

Open pvilasra opened 1 month ago

pvilasra commented 1 month ago

Terraform Core Version

v1.1.2

AWS Provider Version

5.6.2

Affected Resource(s)

aws_s3_bucket_lifecycle_configuration

Expected Behavior

All objects older than one day under the demo/ prefix should get deleted, but the demo/ folder object itself should not get deleted.

Actual Behavior

Unable to set a delete lifecycle configuration with the two filters object_size_greater_than = 0 and prefix = "demo/"

Relevant Error/Panic Output Snippet

aws_s3_bucket_lifecycle_configuration.lifecycle_test_bucket: Modifying... [id=lifecycle-test-bucket3210912]
╷
│ Error: updating S3 Bucket Lifecycle Configuration (lifecycle-test-bucket3210912): MalformedXML: The XML you provided was not well-formed or did not validate against our published schema
│       status code: 400, request id: 6WHRQNV7F8TZSYVH, host id: AKwJJoPAsonTTVoPc49N3V/hLT8qn3zn150VgsV0rWi48a9ldogrf/Suz/tqblir9raYA/QCW9U=
│
│   with aws_s3_bucket_lifecycle_configuration.lifecycle_test_bucket,
│   on main.tf line 412, in resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_test_bucket":
│  412: resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_test_bucket" {
│

Terraform Configuration Files

resource "aws_s3_bucket" "lifecycle_test_bucket" {
  bucket = "lifecycle-test-bucket3210912"
}
resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_test_bucket" {
  bucket = aws_s3_bucket.lifecycle_test_bucket.id

  rule {
    id = "rule-1"

    filter {
      and {
        object_size_greater_than = 0
        prefix                   = "demo/"
        # object_size_less_than    = 1
      }
    }

    # ... other transition/expiration actions ...

    status = "Enabled"
    expiration {
      days = 1
    }
  }
}

Steps to Reproduce

Apply the configuration above using Terraform v1.1.2 and AWS provider version 5.6.2.

Debug Output

$ terraform apply
aws_s3_bucket.lifecycle_test_bucket: Refreshing state... [id=lifecycle-test-bucket3210912]
aws_s3_bucket.ar: Refreshing state... [id=ar-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket.image_recognition: Refreshing state... [id=image-recognition-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket.deploy: Refreshing state... [id=deploy-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket.semantic_search: Refreshing state... [id=semantic-search-data-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket.prometheus: Refreshing state... [id=prometheus-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket_policy.image_recognition: Refreshing state... [id=image-recognition-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket_acl.image_recognition: Refreshing state... [id=image-recognition-eu-central-1-qa-alto-platform-ai,private]
aws_s3_bucket_acl.deploy: Refreshing state... [id=deploy-eu-central-1-qa-alto-platform-ai,private]
aws_s3_bucket_policy.deploy: Refreshing state... [id=deploy-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket_policy.semantic_search: Refreshing state... [id=semantic-search-data-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket_acl.semantic_search: Refreshing state... [id=semantic-search-data-eu-central-1-qa-alto-platform-ai,private]
aws_s3_bucket_acl.prometheus: Refreshing state... [id=prometheus-eu-central-1-qa-alto-platform-ai,private]
aws_s3_bucket_policy.prometheus: Refreshing state... [id=prometheus-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket_lifecycle_configuration.prometheus: Refreshing state... [id=prometheus-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket_policy.ar: Refreshing state... [id=ar-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket_acl.ar: Refreshing state... [id=ar-eu-central-1-qa-alto-platform-ai,private]
aws_s3_bucket_cors_configuration.ar: Refreshing state... [id=ar-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket_versioning.ar: Refreshing state... [id=ar-eu-central-1-qa-alto-platform-ai]
aws_s3_bucket_lifecycle_configuration.lifecycle_test_bucket: Refreshing state... [id=lifecycle-test-bucket3210912]
aws_s3_bucket_lifecycle_configuration.ar: Refreshing state... [id=ar-eu-central-1-qa-alto-platform-ai]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # aws_s3_bucket_lifecycle_configuration.lifecycle_test_bucket has changed
  ~ resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_test_bucket" {
        id     = "lifecycle-test-bucket3210912"
        # (1 unchanged attribute hidden)

      ~ rule {
          ~ id     = "rule-1" -> "myrule-1"
            # (1 unchanged attribute hidden)

          ~ filter {
              + object_size_greater_than = "0"

              - and {
                  - object_size_greater_than = 0 -> null
                  - object_size_less_than    = 0 -> null
                  - prefix                   = "demo/" -> null
                }
            }
            # (1 unchanged block hidden)
        }
    }

Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include       
actions to undo or respond to these changes.

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_s3_bucket_lifecycle_configuration.lifecycle_test_bucket will be updated in-place
  ~ resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_test_bucket" {
        id     = "lifecycle-test-bucket3210912"
        # (1 unchanged attribute hidden)

      ~ rule {
          ~ id     = "myrule-1" -> "rule-1"
            # (1 unchanged attribute hidden)

          ~ filter {
              - object_size_greater_than = "0" -> null

              + and {
                  + object_size_greater_than = 0
                  + prefix                   = "demo/"
                }
            }
            # (1 unchanged block hidden)
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket_lifecycle_configuration.lifecycle_test_bucket: Modifying... [id=lifecycle-test-bucket3210912]
╷
│ Error: updating S3 Bucket Lifecycle Configuration (lifecycle-test-bucket3210912): MalformedXML: The XML you provided was not well-formed or did not validate against our published schema
│       status code: 400, request id: 6WHRQNV7F8TZSYVH, host id: AKwJJoPAsonTTVoPc49N3V/hLT8qn3zn150VgsV0rWi48a9ldogrf/Suz/tqblir9raYA/QCW9U=
│
│   with aws_s3_bucket_lifecycle_configuration.lifecycle_test_bucket,
│   on main.tf line 412, in resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_test_bucket":
│  412: resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_test_bucket" {
│

Panic Output

No response

Important Factoids

No response

References

No response

Would you like to implement a fix?

None

github-actions[bot] commented 1 month ago

Community Note

Voting for Prioritization

Volunteering to Work on This Issue

pravinkhot123 commented 1 month ago

Could you please help me?

rvoh-tismith commented 1 month ago

@pvilasra As an alternative, are you able to change your rule to drop the object_size_greater_than (which would also allow you to drop the and block)? So, like this:

filter {
  prefix = "demo/"
}

I've been working with this resource recently and have noticed some funky behavior around some of the fields within and, but for me this change works in local tests. It should be equivalent to what you have above. If I recall from Cloudtrail logs, I don't even think object_size_greater_than gets set in the actual request unless it is greater than 0, but I might be misremembering that.

A weird note about the and block and prefix -- it seems that if you only use prefix within and, without specifying anything else, you'll get a malformed XML error as well. Kind of weird; seems like a bug in the AWS APIs. Okay, bizarre, but this behavior seems to be different now. Now it isn't letting you get away with only having one field in the and block, but I swear it was before. Weird. I did open a support case with them yesterday, so maybe someone took care of it? lol

However if you really wanted to keep your original example, it also seems to work if you uncomment object_size_less_than and make sure to set it to 1 or above. You could always set it to some insanely high number if you wanted to go that route in order to make sure it applies to every object, but really I would just try the change I mentioned above.
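To make that second option concrete, here is a minimal sketch (my own illustration, not from the thread above; the upper-bound value is an arbitrary assumption chosen to be well above S3's 5 TB per-object limit so the rule still matches every object):

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_test_bucket" {
  bucket = aws_s3_bucket.lifecycle_test_bucket.id

  rule {
    id     = "rule-1"
    status = "Enabled"

    filter {
      and {
        prefix                   = "demo/"
        object_size_greater_than = 0
        # Keeping a second size predicate (>= 1) in the and block seems to
        # avoid the MalformedXML error; this value is deliberately larger
        # than any object S3 can store, so it matches everything.
        object_size_less_than = 109951162777600 # ~100 TiB
      }
    }

    expiration {
      days = 1
    }
  }
}
```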

pravinkhot123 commented 1 month ago

Hi @rvoh-tismith

Sorry for delay in response.

Actually, I don't want my S3 bucket lifecycle rule to delete the entire demo folder; that's why I am setting object_size_greater_than = 0 bytes, so the files inside the demo folder get deleted but not the folder itself. Also, I have gone through the documentation below for that.

https://repost.aws/questions/QU0aFOK4KZQQGos8iTc_W1Yw/s3-object-delete

For now I have applied the lifecycle policy below using Terraform, and I have explicitly changed the minimum object size through the AWS console. It would be a great help if we could achieve both conditions using Terraform only.

resource "aws_s3_bucket" "lifecycle_test_bucket" {
  bucket = "lifecycle-test-bucket3210912"
}

resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_test_bucket" {
  bucket = aws_s3_bucket.lifecycle_test_bucket.id

  rule {
    id     = "rule-1"
    status = "Enabled"

    filter {
      prefix = "demo/"
    }

    expiration {
      days = 1
    }

    # ... other transition/expiration actions ...
  }
}

rvoh-tismith commented 1 month ago

@pravinkhot123 Ah, I see. My bad, I thought maybe that was just included from an example or something. I see you're dealing with a bit of an edge case. I'm pretty sure this happens because when filter.and.object_size_greater_than is zero it doesn't actually get included in the request, so only filter.and.prefix is getting set, and as I mentioned, you're not allowed to set only one field in the and block or you get MalformedXML errors. I'm actually working on a PR that should fix this.
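If that explanation is right, the PutBucketLifecycleConfiguration request body would end up with an And element containing only a single child, which the published S3 schema rejects. Roughly (a reconstruction for illustration, not a captured request):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>rule-1</ID>
    <Filter>
      <And>
        <!-- ObjectSizeGreaterThan presumably dropped because its value is 0 -->
        <Prefix>demo/</Prefix>
      </And>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```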