pulumi / pulumi-aws

An Amazon Web Services (AWS) Pulumi resource package, providing multi-language access to AWS
Apache License 2.0

BucketLifecycleConfigurationV2 is detected as requiring an update even though nothing changed #2128

Open spock-yh opened 2 years ago

spock-yh commented 2 years ago

What happened?

I am creating a BucketLifecycleConfigurationV2 resource with a set of fixed rules and attaching it to a bucket. For some reason, whenever I run pulumi up it detects the resource as requiring an update due to the rules property being modified, even though it hasn't changed.

Steps to reproduce

Here is the code snippet creating the resource. The logsBucket resource is an aws.s3.Bucket resource created earlier, and it is not updated or recreated in these subsequent pulumi up runs. I do have additional resources in the project that are being created or updated in these runs, but the lifecycle configuration itself isn't being changed.

new aws.s3.BucketLifecycleConfigurationV2('website-logs-lifecycle', {
    bucket: logsBucket.bucket,
    rules: [
      {
        id: 'IA-30d_GlacierIR-90d_expire-1y',
        status: 'Enabled',
        abortIncompleteMultipartUpload: {
          daysAfterInitiation: 30,
        },
        expiration: {
          days: 365,
          expiredObjectDeleteMarker: true,
        },
        noncurrentVersionExpiration: {
          noncurrentDays: 365,
        },
        transitions: [
          {
            days: 30,
            storageClass: 'STANDARD_IA',
          },
          {
            days: 90,
            storageClass: 'GLACIER_IR',
          },
        ],
        noncurrentVersionTransitions: [
          {
            noncurrentDays: 30,
            storageClass: 'STANDARD_IA',
          },
          {
            noncurrentDays: 90,
            storageClass: 'GLACIER_IR',
          },
        ],
      },
    ],
  });

Expected Behavior

The lifecycle configuration resource should not be updated if there is no change to the rules or attached bucket.

Actual Behavior

The lifecycle configuration is marked for update due to a diff in the rules ([diff: ~rules]).

Output of pulumi about

CLI
Version      3.39.1
Go Version   go1.19
Go Compiler  gc

Plugins
NAME    VERSION
aws     5.13.0
docker  3.4.1
nodejs  unknown

Host
OS       Microsoft Windows 11 Home
Version  10.0.22000 Build 22000
Arch     x86_64

This project is written in nodejs: executable='C:\Program Files\nodejs\node.exe' version='v16.17.0'

Current Stack: dev

TYPE                                                                  URN
pulumi:pulumi:Stack                                                   urn:pulumi:dev::website::pulumi:pulumi:Stack::website-dev
pulumi:providers:aws                                                  urn:pulumi:dev::website::pulumi:providers:aws::default_5_13_0
aws:s3/bucketV2:BucketV2                                              urn:pulumi:dev::website::aws:s3/bucketV2:BucketV2::website-logs
aws:s3/bucketLifecycleConfigurationV2:BucketLifecycleConfigurationV2  urn:pulumi:dev::website::aws:s3/bucketLifecycleConfigurationV2:BucketLifecycleConfigurationV2::website-logs-lifecycle

Found no pending operations associated with dev

Backend
Name           pulumi.com
URL            https://app.pulumi.com/spock_abadai
User           spock_abadai
Organizations  spock_abadai

Pulumi locates its logs in C:\Users\yhspo\AppData\Local\Temp by default

warning: Failed to get information about the Pulumi program's dependencies: Found C:\dev\abadai\website\pulumi\package-lock.json but not npm: unable to find program: npm.exe

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

iwahbe commented 2 years ago

Hey @spock-yh, thanks for bringing this to our attention. ~Could you run this again and send us a detailed diff for the update? What command are you using to run pulumi, and specifically does it have a --refresh flag?~

iwahbe commented 2 years ago

I'm able to reproduce the issue with only:

import * as aws from "@pulumi/aws";

const logsBucket = new aws.s3.Bucket("i-bucket", {}, {});

new aws.s3.BucketLifecycleConfigurationV2("website-logs-lifecycle", {
  bucket: logsBucket.bucket,
  rules: [
    {
      id: "IA-30d_GlacierIR-90d_expire-1y",
      status: "Enabled",
      expiration: {
        days: 365,
        expiredObjectDeleteMarker: true,
      },
    },
  ],
});

iwahbe commented 2 years ago

@spock-yh It looks like the bridged TF provider doesn't allow specifying both days and expiredObjectDeleteMarker. Doing so results in spurious diffs, as you experienced. The solution is to specify only days or expiredObjectDeleteMarker, but not both. Sorry for the confusion.
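
For anyone hitting this, a minimal sketch of the fix applied to the snippet from the original report: keep days and drop expiredObjectDeleteMarker (the reverse also works, if the goal is only to clean up delete markers):

new aws.s3.BucketLifecycleConfigurationV2('website-logs-lifecycle', {
  bucket: logsBucket.bucket,
  rules: [
    {
      id: 'IA-30d_GlacierIR-90d_expire-1y',
      status: 'Enabled',
      expiration: {
        // Specify days *or* expiredObjectDeleteMarker, never both,
        // to avoid the spurious ~rules diff described above.
        days: 365,
      },
    },
  ],
});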

viveklak commented 2 years ago

The issue seems to be reported on the upstream Terraform provider as well: https://github.com/hashicorp/terraform-provider-aws/issues/11733. As Ian mentioned, the solution might be to drop one of the conflicting fields. We will track improvements to the new decomposed bucket resources in pulumi/pulumi#11740.

BalzGuenat commented 3 days ago

I believe I am running into the same issue, but I don't think it has to do with expiredObjectDeleteMarker. I'm using this config:

const bucket = new Bucket('test-bucket', { acl: 'private' })
new BucketLifecycleConfigurationV2(`bucket-lifecycleConfig`, {
  bucket: bucket.bucket,
  rules: [
    {
      id: 'delete-after-retention-period',
      status: 'Enabled',
      expiration: { days: 30 },
    },
  ],
})
  1. A pulumi up will create the lifecycle config as expected.
  2. Another pulumi up immediately afterward will show the resources as unchanged.
  3. A pulumi refresh will show differences in lifecycleRules:
    pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:s3-test-dev::s3-test::pulumi:pulumi:Stack::s3-test-s3-test-dev]
        ~ aws:s3/bucket:Bucket: (update)
            [id=test-bucket-f0202e6]
            [urn=urn:pulumi:s3-test-dev::s3-test::S3TestStackComponent$aws:s3/bucket:Bucket::test-bucket]
            [provider=urn:pulumi:s3-test-dev::s3-test::pulumi:providers:aws::default_6_43_0::4cb30717-07a6-4cf3-975c-1b6672c1b046]
          ~ lifecycleRules: [
              + [0]: {
                      + abortIncompleteMultipartUploadDays: 0
                      + enabled                           : true
                      + expiration                        : {
                          + date                     : ""
                          + days                     : 30
                          + expiredObjectDeleteMarker: false
                        }
                      + id                                : "delete-after-retention-period"
                      + noncurrentVersionExpiration       : <null>
                      + noncurrentVersionTransitions      : []
                      + prefix                            : ""
                      + tags                              : {}
                      + transitions                       : []
                    }
            ]
  4. Going through with the refresh and then doing pulumi up will update the lifecycleRules, which results in the lifecycle rule actually being deleted.
  5. Another pulumi up immediately afterward will show the resources as unchanged.
  6. A pulumi refresh will show the lifecycle rule as deleted.

The rule being deleted means that this bug can result in very sneaky issues!

I also suspect that with multiple buckets, this can result in a situation where no sequence of up and refresh gets you to the desired actual state.
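
One way to confirm the rule really is gone on the AWS side (a sketch, not from the original report; the bucket name is the physical id shown in the diff above) is to query the lifecycle configuration directly:

aws s3api get-bucket-lifecycle-configuration --bucket test-bucket-f0202e6

After step 4 above, since this was the bucket's only rule, this call would be expected to fail with a NoSuchLifecycleConfiguration error instead of returning the delete-after-retention-period rule.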

BalzGuenat commented 3 days ago

I've investigated some more and compared the Pulumi state JSON file before and after the refresh. Here it is before the refresh (irrelevant stuff omitted):

{
  "version": 3,
  "checkpoint": {
    "stack": "organization/s3-test/s3-test-dev",
    "latest": {
      "resources": [
        {
          "urn": "urn:pulumi:s3-test-dev::s3-test::S3TestStackComponent$aws:s3/bucket:Bucket::test-bucket",
          "custom": true,
          "id": "test-bucket-284b9b6",
          "type": "aws:s3/bucket:Bucket",
          "inputs": {
            "__defaults": ["bucket", "forceDestroy"],
            "acl": "private",
            "bucket": "test-bucket-284b9b6",
            "forceDestroy": false
          },
          "outputs": {
            "bucket": "test-bucket-284b9b6",
            "lifecycleRules": []
          }
        },
        {
          "urn": "urn:pulumi:s3-test-dev::s3-test::S3TestStackComponent$aws:s3/bucketLifecycleConfigurationV2:BucketLifecycleConfigurationV2::bucket-lifecycleConfig",
          "custom": true,
          "id": "test-bucket-284b9b6",
          "type": "aws:s3/bucketLifecycleConfigurationV2:BucketLifecycleConfigurationV2",
          "inputs": {
            "__defaults": [],
            "bucket": "test-bucket-284b9b6",
            "rules": [
              {
                "__defaults": [],
                "expiration": {
                  "__defaults": [],
                  "days": 30
                },
                "id": "delete-after-retention-period",
                "status": "Enabled"
              }
            ]
          }
        }
      ]
    }
  }
}

and here it is after:

{
  "version": 3,
  "checkpoint": {
    "stack": "organization/s3-test/s3-test-dev",
    "latest": {
      "resources": [
        {
          "urn": "urn:pulumi:s3-test-dev::s3-test::S3TestStackComponent$aws:s3/bucket:Bucket::test-bucket",
          "custom": true,
          "id": "test-bucket-284b9b6",
          "type": "aws:s3/bucket:Bucket",
          "inputs": {
            "__defaults": ["bucket", "forceDestroy"],
            "acl": "private",
            "bucket": "test-bucket-284b9b6",
            "forceDestroy": false
          },
          "outputs": {
            "bucket": "test-bucket-284b9b6",
            "lifecycleRules": [
              {
                "abortIncompleteMultipartUploadDays": 0,
                "enabled": true,
                "expiration": {
                  "date": "",
                  "days": 30,
                  "expiredObjectDeleteMarker": false
                },
                "id": "delete-after-retention-period",
                "noncurrentVersionExpiration": null,
                "noncurrentVersionTransitions": [],
                "prefix": "",
                "tags": {},
                "transitions": []
              }
            ]
          }
        },
        {
          "urn": "urn:pulumi:s3-test-dev::s3-test::S3TestStackComponent$aws:s3/bucketLifecycleConfigurationV2:BucketLifecycleConfigurationV2::bucket-lifecycleConfig",
          "custom": true,
          "id": "test-bucket-284b9b6",
          "type": "aws:s3/bucketLifecycleConfigurationV2:BucketLifecycleConfigurationV2",
          "inputs": {
            "__defaults": [],
            "bucket": "test-bucket-284b9b6",
            "rules": [
              {
                "__defaults": [],
                "expiration": {
                  "__defaults": [],
                  "days": 30
                },
                "id": "delete-after-retention-period",
                "status": "Enabled"
              }
            ]
          }
        }
      ]
    }
  }
}

As you can see, the BucketLifecycleConfigurationV2 resource is present in both, but after the refresh the lifecycle rule also shows up in the bucket config directly, under lifecycleRules. This might have to do with a deprecated feature of Terraform.

From the Terraform docs:

Currently, changes to the lifecycle_rule configuration of existing resources cannot be automatically detected by Terraform. To manage changes of Lifecycle rules to an S3 bucket, use the aws_s3_bucket_lifecycle_configuration resource instead. If you use lifecycle_rule on an aws_s3_bucket, Terraform will assume management over the full set of Lifecycle rules for the S3 bucket, treating additional Lifecycle rules as drift. For this reason, lifecycle_rule cannot be mixed with the external aws_s3_bucket_lifecycle_configuration resource for a given S3 bucket.

and from further up:

lifecycle_rule - [...] Terraform will only perform drift detection if a configuration value is provided. Use the resource aws_s3_bucket_lifecycle_configuration instead.

It seems Pulumi doesn't follow this behavior exactly; instead, it always performs drift detection on the lifecycleRules prop of a bucket.

Indeed, lifecycleRules in Pulumi doesn't seem to be deprecated, so one potential workaround is to use that directly instead of BucketLifecycleConfigurationV2.
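
A hedged sketch of that workaround (untested here; field names follow the aws.s3.Bucket input schema, where the rule's enabled flag takes the place of status):

const bucket = new Bucket('test-bucket', {
  acl: 'private',
  // Managing the rules on the bucket resource itself keeps drift
  // detection and rule ownership in one resource, sidestepping the
  // refresh conflict with a separate BucketLifecycleConfigurationV2.
  lifecycleRules: [
    {
      id: 'delete-after-retention-period',
      enabled: true,
      expiration: { days: 30 },
    },
  ],
})

Per the Terraform docs quoted above, rules managed this way must not be mixed with a separate lifecycle configuration resource on the same bucket.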