Closed bobdoah closed 8 months ago
Thanks for the issue! Your assumption is spot on. To pass the initial validation, we need to use the policy provided by the Databricks provider; we can only create the restricted bucket policy once we have the workspace ID, since the prefix of the root bucket uses the workspace ID.
Let me think on this one and ask some folks. The most straightforward answer would be a post-deployment step of removing the original bucket policy from the state to avoid the conflict on subsequent applies, but that could introduce its own complexities.
This has been resolved in the most recent PR. I've added two S3 bucket policies: one for when the restrictive root bucket is enabled and one for when it isn't. When it is enabled, I've added a lifecycle block that ignores changes to the original policy. I tested it and no change was forced.
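As a rough sketch of the approach described above (resource and variable names here are illustrative, not necessarily the ones used in the PR), the two policies can be toggled with `count`, with the restrictive-path resource ignoring later changes to its `policy` attribute so the post-workspace restrictive policy doesn't get reverted:

```hcl
# Hypothetical sketch; variable/resource names are assumptions.

# Used when the restrictive root bucket feature is disabled:
# Terraform keeps managing the provider-supplied default policy.
resource "aws_s3_bucket_policy" "default" {
  count  = var.enable_restrictive_root_bucket ? 0 : 1
  bucket = aws_s3_bucket.root_storage_bucket.id
  policy = data.databricks_aws_bucket_policy.this.json
}

# Used when the restrictive root bucket feature is enabled: the
# default policy is applied only to pass workspace-creation
# validation, and subsequent drift (i.e. the restrictive policy
# written after the workspace exists) is ignored.
resource "aws_s3_bucket_policy" "initial_for_restrictive" {
  count  = var.enable_restrictive_root_bucket ? 1 : 0
  bucket = aws_s3_bucket.root_storage_bucket.id
  policy = data.databricks_aws_bucket_policy.this.json

  lifecycle {
    ignore_changes = [policy]
  }
}
```

With `ignore_changes = [policy]`, later plan/apply cycles no longer try to restore the default policy, so the restrictive policy and this placeholder stop fighting over the bucket's single policy document.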
Let me know if you see any other issues!
The restrictive root bucket policy (https://github.com/databricks/terraform-databricks-sra/blob/main/aws/tf/modules/sra/data_plane_hardening/restrictive_root_bucket/restrictive_root_bucket.tf) contains several references to the workspace id:
This means it has to be applied after the workspace is created. To create the workspace in the first place, a bucket policy that passes the validation calls must already be in place. Currently the default policy is applied: https://github.com/databricks/terraform-databricks-sra/blob/f9bb30810d3e14e3ca0650a20e351ef3ef7a27de/aws/tf/modules/sra/dbfs_s3.tf#L44-L48
As only a single bucket policy can be attached to a bucket, these resources will conflict with each other on subsequent plan/apply cycles.
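To illustrate the conflict (names here are hypothetical, not the exact resources in the module): two `aws_s3_bucket_policy` resources pointed at the same bucket each claim ownership of that bucket's one policy document, so every apply of one shows up as drift to the other.

```hcl
# Hypothetical illustration of the conflict, not the module's code.

# Applied first so workspace-creation validation succeeds.
resource "aws_s3_bucket_policy" "databricks_default" {
  bucket = aws_s3_bucket.root_storage_bucket.id
  policy = data.databricks_aws_bucket_policy.this.json
}

# Applied after the workspace exists (it interpolates the workspace
# ID). Same bucket, so each apply overwrites the other resource's
# policy, and the next plan wants to put it back.
resource "aws_s3_bucket_policy" "restrictive_root_bucket" {
  bucket = aws_s3_bucket.root_storage_bucket.id
  policy = data.aws_iam_policy_document.restrictive.json
}
```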