hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

[Bug]: DataSync locations and tasks are recreated on every terraform apply even when there are no changes #29280

Open Rijula-Balaram opened 1 year ago

Rijula-Balaram commented 1 year ago

Terraform Core Version

1.3.7

AWS Provider Version

4.24.0

Affected Resource(s)

resource "aws_datasync_location_s3" resource "aws_datasync_task"

Expected Behavior

These resources must not be recreated after the first apply if nothing about them has changed.

Actual Behavior

These resources are recreated on every terraform apply even if nothing about them has changed. In addition, the ENIs associated with the tasks are not deleted.

Relevant Error/Panic Output Snippet

resource "aws_datasync_location_s3" "location" {
  s3_bucket_arn = aws_s3_bucket.bucket.arn
  subdirectory  = var.datasync_s3_location_config["subdirectory"]
  s3_config {
    bucket_access_role_arn = aws_iam_role.datasync_access_for_s3_role.arn
  }
  tags = merge(
    var.serviceRelated_tags,
    {
      Name = var.datasync_s3_location_name
    },
  )
}

Terraform Configuration Files

resource "aws_datasync_location_s3" "location" { s3_bucket_arn = aws_s3_bucket.bucket.arn subdirectory = var.datasync_s3_location_config["subdirectory"] s3_config { bucket_access_role_arn = aws_iam_role.datasync_access_for_s3_role.arn } tags = merge( var.serviceRelated_tags, { Name = var.datasync_s3_location_name }, ) }

Steps to Reproduce

Deploy the above code, then apply the same unchanged code again: the location gets recreated on every apply, along with the associated task.

Debug Output

No response

Panic Output

No response

Important Factoids

No response

References

No response

Would you like to implement a fix?

None

github-actions[bot] commented 1 year ago

Community Note

Voting for Prioritization

Volunteering to Work on This Issue

Rijula-Balaram commented 1 year ago

Could you please provide an update on this issue

Sighery commented 1 year ago

Running into this as well. I can try to provide a sample configuration after some cleanup, but basically I have an EFS file system, an S3 bucket, and a DataSync task copying from S3 to EFS. The aws_datasync_location_s3 and aws_datasync_task resources always get replaced for some reason, even if I have just applied the changes and touched nothing since, neither in the AWS console nor in the infrastructure code.
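
A rough sketch of this kind of setup, for reference. The names and variable-supplied ARNs below are placeholders rather than the actual configuration from this report; the bucket, EFS file system, subnet, security group, and IAM role are assumed to exist elsewhere and are passed in as variables.

variable "dags_bucket_arn" {
  type = string
}

variable "dags_bucket_access_role_arn" {
  type = string
}

variable "dags_efs_file_system_arn" {
  type = string
}

variable "dags_efs_subnet_arn" {
  type = string
}

variable "dags_efs_security_group_arn" {
  type = string
}

# Source side of the transfer: the S3 bucket.
resource "aws_datasync_location_s3" "dags_source" {
  s3_bucket_arn = var.dags_bucket_arn
  subdirectory  = "/dags/"

  s3_config {
    bucket_access_role_arn = var.dags_bucket_access_role_arn
  }
}

# Destination side of the transfer: the EFS file system.
resource "aws_datasync_location_efs" "dags_destination" {
  efs_file_system_arn = var.dags_efs_file_system_arn

  ec2_config {
    subnet_arn          = var.dags_efs_subnet_arn
    security_group_arns = [var.dags_efs_security_group_arn]
  }
}

# Task copying from the S3 location to the EFS location.
resource "aws_datasync_task" "dags" {
  name                     = "dags-s3-to-efs"
  source_location_arn      = aws_datasync_location_s3.dags_source.arn
  destination_location_arn = aws_datasync_location_efs.dags_destination.arn
}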

This is the output I get, with just some IDs redacted:

Terraform will perform the following actions:

  # aws_datasync_location_efs.dags must be replaced
-/+ resource "aws_datasync_location_efs" "dags" {
      ~ arn                 = "arn:aws:datasync:eu-central-1:account1id:location/loc-loc1id" -> (known after apply)
      ~ id                  = "arn:aws:datasync:eu-central-1:account1id:location/loc-loc1id" -> (known after apply)
      - tags                = {} -> null
      ~ uri                 = "efs://eu-central-1.fs-fs1id/" -> (known after apply)
        # (3 unchanged attributes hidden)

      ~ ec2_config {
          ~ subnet_arn          = "arn:aws:ec2:eu-central-1:account1id:subnet/subnet-subnet1id" -> "arn:aws:ec2:eu-central-1:814050314857:subnet/subnet-subnet1id" # forces replacement
            # (1 unchanged attribute hidden)
        }
    }

  # aws_datasync_task.dags must be replaced
-/+ resource "aws_datasync_task" "dags" {
      ~ arn                      = "arn:aws:datasync:eu-central-1:account1id:task/task-task1id" -> (known after apply)
      ~ destination_location_arn = "arn:aws:datasync:eu-central-1:account1id:location/loc-loc1id" # forces replacement -> (known after apply) # forces replacement
      ~ id                       = "arn:aws:datasync:eu-central-1:account1id:task/task-task1id" -> (known after apply)
        name                     = "dags-s3-to-efs"
      - tags                     = {} -> null
        # (2 unchanged attributes hidden)

      - options {
          - atime                          = "BEST_EFFORT" -> null
          - bytes_per_second               = -1 -> null
          - gid                            = "INT_VALUE" -> null
          - log_level                      = "OFF" -> null
          - mtime                          = "PRESERVE" -> null
          - object_tags                    = "PRESERVE" -> null
          - overwrite_mode                 = "ALWAYS" -> null
          - posix_permissions              = "PRESERVE" -> null
          - preserve_deleted_files         = "PRESERVE" -> null
          - preserve_devices               = "NONE" -> null
          - security_descriptor_copy_flags = "NONE" -> null
          - task_queueing                  = "ENABLED" -> null
          - transfer_mode                  = "CHANGED" -> null
          - uid                            = "INT_VALUE" -> null
          - verify_mode                    = "POINT_IN_TIME_CONSISTENT" -> null
        }
    }

Plan: 2 to add, 0 to change, 2 to destroy.

This is the Terraform/provider version in use:

$ terraform -version
Terraform v1.5.0
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v5.6.2
+ provider registry.terraform.io/hashicorp/random v3.5.1

akats-agathos commented 1 year ago

It may help to wrap the subdirectory value in aws_datasync_location_s3 with leading and trailing slashes, i.e.

resource "aws_datasync_location_s3" "archive_destination" {
  s3_bucket_arn = aws_s3_bucket.client.arn
  # subdirectory  = "data/archive" # NO
  subdirectory  = "/data/archive/" # YES
srgfrancisco commented 10 months ago

I'm facing the same issue as @Rijula-Balaram and @Sighery. Any updates?