ghost opened this issue 4 years ago
I had a similar issue just now, same error message anyway.
We had just upgraded our Terraform version from 0.12.24 to 0.12.29. I had previously run terraform init against my module on 0.12.24. I used tfswitch to swap to 0.12.29 and attempted to run terraform init again:
terraform init -backend-config $BACKEND_CONFIG_BUCKET -backend-config $BACKEND_CONFIG_KEY
and it said:
Error: Error loading state:
BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region at endpoint ''
status code: 301, request id: , host id:
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
I deleted my .terraform folder in the module I was running, and then terraform init worked again. May not fix your case, but may help someone else who finds this exact error.
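By the way, terraform init -backend-config accepts either key=value pairs or paths to partial configuration files, so the backend block itself can stay minimal. A rough sketch of that split (bucket, key, region, and file name here are hypothetical):

# backend.tf -- declares only the backend type; the rest is supplied at init time
terraform {
  backend "s3" {}
}

# config.s3.tfbackend (hypothetical file), passed via:
#   terraform init -backend-config=config.s3.tfbackend
bucket = "example-tf-state-bucket"
key    = "my-module/terraform.tfstate"
region = "us-west-2"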
Tried that, didn't work for me. Hope it will be handy for others. Thanks for the reply.
Has there been any update here? I see intermittent issues with this; it happens on some TF modules and not others.
I am seeing the same:
Error: Error putting S3 policy: BucketRegionError: incorrect region, the bucket is not in 'us-east-2' region at endpoint ''
status code: 301, request id: , host id:
using:
variable "s3_bucket_names" {
type = set(string)
default = [
"example-1g4vh851tyz8c",
"example-1qoja52xga3jt",
"example-mcim7rj05iz"
]
}
resource "aws_s3_bucket_policy" "p1" {
for_each = var.s3_bucket_names
bucket = each.key
policy = <<POLICY
{
"Version":"2012-10-17",
"Statement":[
{
"Sid": "ForceSSLOnlyAccess",
"Effect": "Deny",
"Principal": "*",
"Action": "*",
"Resource": "arn:aws:s3:::${each.key}/*",
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
]
}
POLICY
}
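If those three buckets don't all live in the provider's configured region, that would explain the 301/BucketRegionError on the policy calls, since each PutBucketPolicy has to go through the bucket's own regional endpoint. The provider can't vary per for_each instance, so one workaround is to split the set by region and use an aliased provider for each group. A rough sketch (which bucket lives where is invented here):

provider "aws" {
  region = "us-east-2"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2" # hypothetical second region
}

locals {
  # Hypothetical split of the bucket set by the region each bucket actually lives in
  buckets_east = toset(["example-1g4vh851tyz8c"])
  buckets_west = toset(["example-1qoja52xga3jt", "example-mcim7rj05iz"])

  # Render the same ForceSSLOnlyAccess policy per bucket name
  ssl_only_policy = {
    for name in var.s3_bucket_names : name => jsonencode({
      Version = "2012-10-17"
      Statement = [{
        Sid       = "ForceSSLOnlyAccess"
        Effect    = "Deny"
        Principal = "*"
        Action    = "*"
        Resource  = "arn:aws:s3:::${name}/*"
        Condition = { Bool = { "aws:SecureTransport" = "false" } }
      }]
    })
  }
}

resource "aws_s3_bucket_policy" "east" {
  for_each = local.buckets_east
  bucket   = each.key
  policy   = local.ssl_only_policy[each.key]
}

resource "aws_s3_bucket_policy" "west" {
  for_each = local.buckets_west
  provider = aws.west
  bucket   = each.key
  policy   = local.ssl_only_policy[each.key]
}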
Are you sure you don't have any 'us-east-2' in your code overriding the default region? I had the same issue and found a hardcoded region = "us-east-2" in an output section that I had copied from somewhere.
FWIW, on 0.14.7 I needed to delete the .terraform directory. Works.
What I understand is that S3 buckets are not region-dependent; they are global like CloudFront. So, using us-east-1 as the region should solve the problem.
Changing the region as @maateen pointed out fixed the error for me!
"I deleted my .terraform folder in the module I was running, and then terraform init worked again."
I did that, and it works.
For me, this was an error with an environment variable setting. Changing
S3_SAMPLE_BUCKET = aws_s3_bucket.s3-sample-bucket.arn
to
S3_SAMPLE_BUCKET = aws_s3_bucket.s3-sample-bucket.bucket
resolved the issue.
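The difference matters because .arn is the full ARN while .bucket (or .id) is just the bucket name, and anything that expects a bucket name will trip over the ARN. A tiny sketch using the same resource name (the bucket name itself is hypothetical):

resource "aws_s3_bucket" "s3-sample-bucket" {
  bucket = "example-sample-bucket"
}

output "as_arn" {
  value = aws_s3_bucket.s3-sample-bucket.arn    # "arn:aws:s3:::example-sample-bucket"
}

output "as_name" {
  value = aws_s3_bucket.s3-sample-bucket.bucket # "example-sample-bucket" -- use this where a bucket name is expected
}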
I had the same problem: I realized I created the bucket in a different region.
Facing the same issue. Verified that provider.tf, backend.tf, and the S3 bucket are all using the required us-west-1 region, but I'm still getting the same error: BucketRegionError: incorrect region, the bucket is not in 'us-west-1' region at endpoint ''
If you have this issue, it would mean that the bucket is not in that region. The backend won't init without a bucket already in place first.
Buckets are global, they do NOT sit in a region.... Get your facts straight.
Buckets are global, they do NOT sit in a region.... Get your facts straight.
That's wrong, @chris1248. The region is even visible in the URL, e.g. https://bucket-name.s3.eu-west-1.amazonaws.com/test/file.txt
Buckets are global, they do NOT sit in a region.... Get your facts straight.
You think maybe AWS themselves have their facts straight?
I had the same problem. Later, I found that I had used the wrong bucket name in my backend "s3" block.
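For anyone double-checking their own setup: the bucket and region in the backend block have to match an existing bucket exactly. A minimal sketch with hypothetical values:

terraform {
  backend "s3" {
    bucket = "example-tf-state-bucket"      # must already exist
    key    = "my-project/terraform.tfstate"
    region = "us-west-2"                    # and must be the region that bucket actually lives in
  }
}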
Deleting .terraform and .terraform.lock.hcl worked for me.
In my case, I needed to create a bucket in another region for a DR scenario. Then I found that CloudFront has to be created in us-east-1, so I had to switch the main/DR regions. Even though I ran a destroy, it seems some records of the previous S3 locations were left behind, and that was causing the problem. Deleting .terraform worked for me too. Thank you.
Found this thread while hitting the same issue. The issue for us was unrelated to bucket creation. Instead it was due to terraform_remote_state pointing to the wrong region (we have TF state in S3).
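For anyone else hitting it through remote state, the region inside the data source's config is the easy one to copy wrong. A sketch with hypothetical names:

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-tf-state-bucket"
    key    = "network/terraform.tfstate"
    region = "eu-west-1" # must be the region the state bucket actually lives in
  }
}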
@lenka-cizkova Same with me.
BTW, you can also use tenv, which supports Terraform as well as OpenTofu (and Terragrunt :) ) in one tool. It allows you to simplify version management and can do much more than tfswitch.
I had this problem using Terragrunt with a multi-region setup, fixed by using the remote state in one specific region and separate state files per region.
backend = "s3"
config = {
region = "eu-central-1"
encrypt = true
bucket = "example-${local.account_id}"
key = "${local.common_vars.locals.project}/${path_relative_to_include()}/terraform.tfstate"
dynamodb_table = "example-tf-lock-${local.account_id}"
Same problem. Apparently you cannot upload a file to a bucket that is in a different region than the one configured in the provider, which is a very unfortunate limitation if that is the case. We are using a multi-region deployment where we configure the provider, but all of our state and deployment files are in one bucket in one region. This forces us to create extra buckets per region.
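One workaround sketch (alias and names here are hypothetical): keep a second, aliased provider pinned to the region of the shared bucket and point the object uploads at it, so they don't go through the deployment region's endpoint:

variable "deploy_region" {
  type = string # region this particular stack deploys into
}

provider "aws" {
  region = var.deploy_region
}

provider "aws" {
  alias  = "shared_bucket"
  region = "eu-west-1" # region where the single shared bucket lives (hypothetical)
}

resource "aws_s3_object" "deployment_file" {
  provider = aws.shared_bucket
  bucket   = "example-shared-deployments"
  key      = "releases/app.zip"
  source   = "${path.module}/build/app.zip"
}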
The same error message popped up for me, then I realised that the bucket name I was using was not unique ... AWS bucket names need to be GLOBALLY unique. I'd overlooked this and used the same Terraform bucket name for production (in a separate AWS account) as I did for sandbox. Changing the bucket name in the backend block fixed it for me.
This issue was originally opened by @Eliasi1 as hashicorp/terraform#25782. It was migrated here as a result of the provider split. The original body of the issue is below.
Terraform and other providers Version
Terraform v0.12.29
Affected Resource(s)
aws_s3_bucket_object
Terraform Configuration Files
s3_cluster_files.tf:
provider.tf:
backend.tf:
Output
10: resource "aws_s3_bucket_object" "put_prometheus" {
Expected Behavior
Should put prometheus.yml file in S3 bucket
Actual Behavior
Fails with error BucketRegionError
Steps to Reproduce
Important Factoids
The AWS provider is configured to use us-east-2, as that is where we would like to provision. The S3 bucket is in us-east-2, and so is the ECS cluster. In fact, the whole project is built in the same region; no other region is mentioned anywhere. I have tried to delete the .terraform folder and recreate. I have looked for a solution on Google but no luck.
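The original .tf files aren't reproduced above, but the setup described boils down to something like this hypothetical sketch (bucket name invented), which is the kind of configuration that fails with the BucketRegionError when the bucket does not actually resolve to us-east-2:

provider "aws" {
  region = "us-east-2"
}

resource "aws_s3_bucket_object" "put_prometheus" {
  bucket = "example-cluster-files" # hypothetical; the real bucket is said to be in us-east-2
  key    = "prometheus.yml"
  source = "${path.module}/prometheus.yml"
}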