sufiyanghori opened 4 years ago
We are no longer investigating issues reported against Terraform 0.11.x. Terraform 0.12 has been available since May of 2019, and there are really significant benefits to adopting it. We're actively working on Terraform 0.13. I know that adopting 0.12 can require a bit of effort, but it really is worth it, and the upgrade path is pretty well understood in the community by now.
This is otherwise a really well formed issue report, and I appreciate the careful writeup. Can you please try reproducing this issue on 0.12.24?
Thank you Daniel for your comment. I just tried, and I can reproduce it in version 0.12.24 as well.
Thanks for reproducing that on 0.12.24 so quickly, @sufiyanghori! I think this is not a Terraform bug, but rather a configuration mismatch between how you've configured AWS and Terraform.
If I use
backend "s3" {
  bucket   = "my-terraform-bucket"
  key      = "my.state.file"
  region   = "ap-southeast-2"
  role_arn = "arn:aws:iam::1234567890:role/MyDevelopmentRole"
}
My expectation is that Terraform would try to store state in arn:aws:s3:::my-terraform-bucket/my.state.file. The IAM policy you're describing only grants "s3:PutObject" permissions inside the "arn:aws:s3:::my-terraform-bucket/env:/development/*" path. This is also at odds with the s3://my-terraform-bucket/env:/development/my.state.file path you described.
It looks to me like you have configured Terraform to put state in one place in S3, and configured an IAM policy that does not grant write access to that place.
Based on what you've described, I think you're looking for something more like:
backend "s3" {
bucket = "my-terraform-bucket"
key = "env/development/my.state.file"
region = "ap-southeast-2"
role_arn = "arn:aws:iam::1234567890:role/MyDevelopmentRole"
}
Additionally, I suspect your IAM policy should omit the extra : in my-terraform-bucket/env:/development/*, because that seems like a weird thing to have in an object name:
"Resource": "arn:aws:s3:::my-terraform-bucket/env:/development/*"
Based on the information you've provided, if I'm understanding this right, I don't immediately see evidence of a bug in Terraform core. It's possible that there's a defect, but there's not enough evidence here for me to treat it as one.
If the suggestions I've provided aren't enough to resolve your issue, I think your best bet is to seek support on the community forum. We use GitHub issues for tracking bugs and enhancements, rather than for questions. While we can sometimes help with certain simple problems here, it's better to use the community forum where there are more people ready to help. The GitHub issues here are monitored only by our few core maintainers.
Can you take another look at those paths and see whether this still looks like a bug to you?
Thank you @danieldreier for your response.
env: is a prefix added by default:
workspace_key_prefix - (Optional) The prefix applied to the state path inside the bucket. This is only relevant when using a non-default workspace. This defaults to "env:"
Source: https://www.terraform.io/docs/backends/types/s3.html
With the following default configuration,
backend "s3" {
  bucket   = "my-terraform-bucket"
  key      = "my.state.file"
  region   = "ap-southeast-2"
  role_arn = "arn:aws:iam::1234567890:role/MyDevelopmentRole"
}
if you run terraform workspace new development, it will create a new workspace with a state file at s3://my-terraform-bucket/env:/development/my.state.file, not s3://my-terraform-bucket/my.state.file.
This process at no point creates a state file at s3://my-terraform-bucket/my.state.file, but it will check the permissions on it, if it exists.
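To illustrate, the two paths involved with the configuration above would be (as I understand the layout):
s3://my-terraform-bucket/my.state.file                     # default workspace (key as-is)
s3://my-terraform-bucket/env:/development/my.state.file    # development workspace (env: prefix added)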
Also, Terraform recommends using policies like this:
Terraform will need the following AWS IAM permissions on the target backend bucket:
s3:ListBucket on arn:aws:s3:::mybucket
s3:GetObject on arn:aws:s3:::mybucket/path/to/my/key
s3:PutObject on arn:aws:s3:::mybucket/path/to/my/key
Source: https://www.terraform.io/docs/backends/types/s3.html
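For this setup, that would translate into a policy roughly like the following (a sketch only; the bucket name and workspace prefix are taken from the example above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-terraform-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-terraform-bucket/env:/development/*"
    }
  ]
}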
Ah, thanks. My apologies, I'm a (relatively new) engineering manager on this project and haven't used workspaces together with the S3 backend, so I didn't understand that path. Thanks for clarifying.
I think I'm a bit out of my depth here, and this backend is maintained by the Terraform AWS Provider team, so I'm going to label it for their attention and let them pick up triage.
My team has hit issues with this as well, where a state file is accidentally created at the root of the S3 bucket; if there is a newer Terraform version or a unique provider in that file, it will block all of our other projects from running terraform init. We provide the workspace_key_prefix on every project.
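For reference, that looks roughly like this in each project's backend block (a sketch; the prefix value here is illustrative):
backend "s3" {
  bucket               = "my-terraform-bucket"
  key                  = "my.state.file"
  region               = "ap-southeast-2"
  # illustrative value; the point is a distinct prefix per project
  workspace_key_prefix = "projects/my-project"
}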
I have updated the ticket with more findings. I have realised that the state file at the root of the bucket is not created accidentally; rather, it is created when you switch to the default workspace. This breaks permission checks for other workspaces.
Terraform Version
Details
When a Terraform project is initialized for the first time, it makes a recursive s3:GetObject call on the parent object in the S3 backend bucket as well, if the parent has an object with the same name as the state file (this parent object is automatically created when switching to the default workspace). For example:
Suppose AWS Account A has an S3 backend bucket called my-terraform-bucket. The bucket has a single workspace called development, and the state file is called my.state.file. So the state file path becomes:
s3://my-terraform-bucket/env:/development/my.state.file
Now, I want to give the role arn:aws:iam::1234567890:role/MyDevelopmentRole permission to access the state file. So the corresponding S3 backend configuration becomes:
backend "s3" {
  bucket   = "my-terraform-bucket"
  key      = "my.state.file"
  region   = "ap-southeast-2"
  role_arn = "arn:aws:iam::1234567890:role/MyDevelopmentRole"
}
and the bucket has the following policy attached.
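Roughly, a minimal version of that bucket policy looks like this (a sketch, assuming only the ListBucket/GetObject/PutObject actions from the docs quoted earlier):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::1234567890:role/MyDevelopmentRole" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-terraform-bucket"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::1234567890:role/MyDevelopmentRole" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-terraform-bucket/env:/development/*"
    }
  ]
}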
Now if I do terraform init, it will work. However, if I switch to the default workspace, i.e. terraform workspace select default, it will create a new state file at the root of the backend bucket, i.e.
s3://my-terraform-bucket/my.state.file
The terraform init (if the local .terraform directory has been removed before running it) will fail, because it tries to execute GetObject on s3://my-terraform-bucket/my.state.file instead of s3://my-terraform-bucket/env:/development/my.state.file, and throws the following error.
Terraform Configuration Files
Debug Output
Crash Output
Expected Behavior
Terraform should have checked access on s3://my-terraform-bucket/env:/development/my.state.file and should have successfully initialized.
Actual Behavior
Terraform checks whether it can get s3://my-terraform-bucket/my.state.file and fails, because the bucket policy restricts access to a specific key only.
Steps to Reproduce
1. Create a new bucket for the S3 backend.
2. Add a policy so that your role can access only the state file, and nothing else from the parent directory (as sketched above).
3. Create a new project with the following S3 backend config:
   backend "s3" {
     bucket   = "my-terraform-bucket"
     key      = "my.state.file"
     region   = "ap-southeast-2"
     role_arn = "arn:aws:iam::1234567890:role/MyDevelopmentRole"
   }
4. Do terraform init and create a workspace called development.
5. Delete the .terraform directory and re-run terraform init. This will work.
6. Go to your S3 bucket and upload an empty file with the same name as the Terraform state file, such that the path of the file is s3://your-bucket-name/my.state.file (see the CLI sketch after this list).
7. Delete the .terraform directory and re-run terraform init. This will not work.
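For step 6, the empty file can also be uploaded with the AWS CLI, for example (the bucket name is a placeholder):
# create an empty local file and copy it to the bucket root,
# giving it the same name as the Terraform state file
touch my.state.file
aws s3 cp my.state.file s3://your-bucket-name/my.state.file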
Additional Context
References