korporationcl opened this issue 2 years ago
Hi @korporationcl, thank you for reporting this issue. Do you mind also providing your provider configuration block details and/or the specific region or endpoints settings in use when creating S3 resources?
Please also make sure that in your terraform import command you use your actual S3 bucket name and not the configuration label of your bucket (e.g., "terraform" or "example").
If your v3 config is:

resource "aws_s3_bucket" "example" {
  bucket = "marbella-4815162342"
  acl    = "private"
}

And your v4 config is:

resource "aws_s3_bucket" "example" {
  bucket = "marbella-4815162342"
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}
The ID for the import command would be marbella-4815162342,private and not example,private:
% terraform import aws_s3_bucket_acl.example marbella-4815162342,private
^^^^^^^^^^^^^^^^^^^
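To make the distinction concrete, here is a small Python sketch (the helper name is hypothetical, not part of Terraform or the provider) that composes the aws_s3_bucket_acl import ID from the real bucket name:

```python
# Hypothetical helper illustrating the import ID format for
# aws_s3_bucket_acl: "<bucket-name>,<acl>". The bucket name must be
# the S3 bucket name (the `bucket` argument in the config), never
# the Terraform resource label.
def build_acl_import_id(bucket_name: str, acl: str) -> str:
    """Compose the aws_s3_bucket_acl import ID."""
    return f"{bucket_name},{acl}"

# Right: the value of the `bucket` argument in the config above.
print(build_acl_import_id("marbella-4815162342", "private"))
# prints: marbella-4815162342,private
# Wrong: "example" is only the Terraform label, not a bucket you own.
```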
The unhelpful BucketRegionError has to do with how S3 handles errors. example and terraform are actual S3 buckets that someone, somewhere, owns. But region errors take precedence over ownership errors.
These are the results of attempting an s3/GetBucketAcl operation on various buckets:
bucket | do you own? | access endpoint | error
---|---|---|---
terraform in us-east-1 | no | ap-southeast-2 | BucketRegionError: incorrect region, the bucket is not in 'ap-southeast-2' region at endpoint '', bucket is in 'us-east-1' region
terraform in us-east-1 | no | us-east-1 | AccessDenied: Access Denied
costadelsol in eu-north-1 | yes | us-west-2 | BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region at endpoint '', bucket is in 'eu-north-1' region
example in us-east-1 | no | ap-northeast-2 | BucketRegionError: incorrect region, the bucket is not in 'ap-northeast-2' region at endpoint '', bucket is in 'us-east-1' region
grogu in us-west-2 | no | us-east-1 | AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'
wong in ap-south-1 | no | us-east-1 | AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'ap-south-1'
taylorswift in us-west-1 | no | eu-north-1 | BucketRegionError: incorrect region, the bucket is not in 'eu-north-1' region at endpoint '', bucket is in 'us-west-1' region
@anGie44 I have the same issue:
$ terraform import module.tfstate_bucket.aws_s3_bucket_versioning.terraform terraform
module.tfstate_bucket.aws_s3_bucket_versioning.terraform: Importing from ID "terraform"...
module.tfstate_bucket.aws_s3_bucket_versioning.terraform: Import prepared!
Prepared aws_s3_bucket_versioning for import
module.tfstate_bucket.aws_s3_bucket_versioning.terraform: Refreshing state... [id=terraform]
╷
│ Error: error waiting for S3 Bucket Versioning status for bucket (terraform): BucketRegionError: incorrect region, the bucket is not in 'ca-central-1' region at endpoint '', bucket is in 'us-east-1' region
│ status code: 301, request id: [REDACTED], host id: [REDACTED]
I have a provider file (aws.tf) with the following block:
provider "aws" {
  profile = var.profile
  region  = var.region
}
And my terraform.tfvars is:
region = "ca-central-1"
profile = "terraformer"
I can confirm that my bucket is, in fact, in ca-central-1.
I am unable to reproduce this problem. Below I'm including the configuration and commands I used. Let me know if you're doing something different.
I've tried these region/credential setups, and they all work for importing aws_s3_bucket_acl.
Creds | provider region | provider profile | AWS_DEFAULT_REGION | AWS_PROFILE | AWS config profile region | shared credentials region
---|---|---|---|---|---|---
Static | ap-southeast-2 | (none) | us-west-2 | ct | us-west-2 | (none)
Static | (none) | (none) | ap-southeast-2 | ct | us-west-2 | (none)
STS | ap-southeast-2 | tf_alt1 | us-west-2 | tf_alt1 | us-west-2 | (none)
main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.74.2"
    }
  }
}

provider "aws" {
  region = "ap-southeast-2"
}

resource "aws_s3_bucket" "example" {
  bucket = "villaviciosa-2697126908"
  acl    = "private"
}
Console:
% terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "3.74.2"...
- Installing hashicorp/aws v3.74.2...
- Installed hashicorp/aws v3.74.2 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
% terraform apply
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
...
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_s3_bucket.example: Creating...
aws_s3_bucket.example: Still creating... [10s elapsed]
aws_s3_bucket.example: Creation complete after 16s [id=villaviciosa-2697126908]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.1.0"
    }
  }
}

provider "aws" {
  region = "ap-southeast-2"
}

resource "aws_s3_bucket" "example" {
  bucket = "villaviciosa-2697126908"
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}
Console:
% echo $AWS_DEFAULT_REGION
us-west-2
% terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "4.1.0"...
- Installing hashicorp/aws v4.1.0...
- Installed hashicorp/aws v4.1.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
% terraform import aws_s3_bucket.example villaviciosa-2697126908
aws_s3_bucket.example: Importing from ID "villaviciosa-2697126908"...
aws_s3_bucket.example: Import prepared!
Prepared aws_s3_bucket for import
aws_s3_bucket.example: Refreshing state... [id=villaviciosa-2697126908]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
% terraform import aws_s3_bucket_acl.example villaviciosa-2697126908,private
aws_s3_bucket_acl.example: Importing from ID "villaviciosa-2697126908,private"...
aws_s3_bucket_acl.example: Import prepared!
Prepared aws_s3_bucket_acl for import
aws_s3_bucket_acl.example: Refreshing state... [id=villaviciosa-2697126908,private]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
% terraform apply
aws_s3_bucket.example: Refreshing state... [id=villaviciosa-2697126908]
aws_s3_bucket_acl.example: Refreshing state... [id=villaviciosa-2697126908,private]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
@korporationcl @halostatue Since we are having trouble reproducing this issue, can you provide additional information to help us understand what's happening? Can you share debug logs (export TF_LOG_PATH=terraform.log && export TF_LOG=debug), and the relevant parts of your AWS credentials (e.g., ~/.aws/credentials), AWS config (e.g., ~/.aws/config), environment (e.g., echo $AWS_DEFAULT_REGION), and provider configuration?

My ~/.aws/credentials
sets my terraformer profile to ca-central-1. I do not set it in ~/.aws/config. I have AWS_REGION=ca-central-1 and, as I indicated, my provider is configured via terraform.tfvars, where region = "ca-central-1".
My previous run was trying to import aws_s3_bucket_versioning, but I get the same thing when trying to import aws_s3_bucket_acl:
terraform import module.tfstate_bucket.aws_s3_bucket_acl.terraform terraform,private
module.tfstate_bucket.aws_s3_bucket_acl.terraform: Importing from ID "terraform,private"...
module.tfstate_bucket.aws_s3_bucket_acl.terraform: Import prepared!
Prepared aws_s3_bucket_acl for import
module.tfstate_bucket.aws_s3_bucket_acl.terraform: Refreshing state... [id=terraform,private]
╷
│ Error: error getting S3 bucket ACL (terraform,private): BucketRegionError: incorrect region, the bucket is not in 'ca-central-1' region at endpoint '', bucket is in 'us-east-1' region
│ status code: 301, request id: [REDACTED], host id: [REDACTED]
│
│
╵
I’m attaching a copy of terraform.log as instructed above. I have redacted identifying information from it (it would be good if this could be done automatically so that keys, account numbers, etc. aren’t exposed).
My terraformer user is fairly locked down, with only those permissions which I have so far identified as required.
@halostatue Thank you for the information. We will look through the debug logs.
Using a similar configuration to yours, I'm still unable to reproduce the issue. We must be missing a detail somewhere. Please look over how I've configured things in my attempt to reproduce and see how your setup varies, so I can hopefully reproduce the problem.
Note: There are 3 differences from your setup: 1) region ap-southeast-2, 2) no variables to set region/profile, and 3) no module. However, the OP uses ap-southeast-2, does not appear to be using variables to set the region and profile, and is not using modules.
% env | grep AWS
AWS_REGION=ap-southeast-2
% cat ~/.aws/config
cat: ~/.aws/config: No such file or directory
% cat ~/.aws/credentials
[ct]
aws_access_key_id = << redacted >>
aws_secret_access_key = << redacted >>
region=ap-southeast-2
And then in my Terraform configuration:
provider "aws" {
  region  = "ap-southeast-2"
  profile = "ct"
}
And the result:
% terraform import aws_s3_bucket_acl.example villaviciosa-2697126908,private
aws_s3_bucket_acl.example: Importing from ID "villaviciosa-2697126908,private"...
aws_s3_bucket_acl.example: Import prepared!
Prepared aws_s3_bucket_acl for import
aws_s3_bucket_acl.example: Refreshing state... [id=villaviciosa-2697126908,private]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
This seems to be very similar to your setup. Let me know if there are differences.
$ env | grep AWS
AWS_REGION=ca-central-1
AWS_PROFILE=terraformer
$ cat ~/.aws/config
[preview]
cloudfront = true
[default]
cli_history = enabled
$ ag '^\[terraformer\]' -A5 ~/.aws/credentials
159:[terraformer]
160-# terraformer = halostatue-infrastructure
161-aws_access_key_id = <redacted>
162-aws_secret_access_key = <redacted>
163-region = ca-central-1
164-
$ cat aws.tf
provider "aws" {
  profile = "terraformer"
  region  = "ca-central-1"
}
$ terraform import module.tfstate_bucket.aws_s3_bucket_acl.terraform terraform,private
module.tfstate_bucket.aws_s3_bucket_acl.terraform: Importing from ID "terraform,private"...
module.tfstate_bucket.aws_s3_bucket_acl.terraform: Import prepared!
Prepared aws_s3_bucket_acl for import
module.tfstate_bucket.aws_s3_bucket_acl.terraform: Refreshing state... [id=terraform,private]
╷
│ Error: error getting S3 bucket ACL (terraform,private): BucketRegionError: incorrect region, the bucket is not in 'ca-central-1' region at endpoint '', bucket is in 'us-east-1' region
│ status code: 301, request id: <redacted>, host id: <redacted>
│
│
╵
For simplicity in this case, I modified aws.tf to include the profile and region as static strings rather than variables.
@halostatue Is it possible this bucket is old? Can you create a brand new bucket and attempt to import it? The comment above gives you configuration to test out a new bucket.
Also, just to make sure there aren't differences between the AWS Console and CLI, can you run this command on the bucket?
% aws s3api get-bucket-location --bucket villaviciosa-2697126908
{
"LocationConstraint": "ap-southeast-2"
}
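One caveat when reading get-bucket-location output: for buckets in us-east-1 the API reports a null LocationConstraint, which is easy to misread as "no region". A small Python sketch (hypothetical helper name, for illustration only) that normalizes the CLI's JSON output:

```python
import json

def bucket_region(cli_json: str) -> str:
    """Normalize `aws s3api get-bucket-location` output to a region name.

    Documented S3 quirk: us-east-1 buckets return a null
    LocationConstraint rather than the region name.
    """
    constraint = json.loads(cli_json)["LocationConstraint"]
    return constraint or "us-east-1"

print(bucket_region('{"LocationConstraint": "ap-southeast-2"}'))  # ap-southeast-2
print(bucket_region('{"LocationConstraint": null}'))              # us-east-1
```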
According to the debug log, AWS seems to think your bucket moved to us-east-1:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 301 Moved Permanently
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Wed, 16 Feb 2022 19:05:48 GMT
Location: https://amazonaws.com/badhttpredirectlocation
Server: AmazonS3
X-Amz-Bucket-Region: us-east-1
X-Amz-Id-2: ...
X-Amz-Request-Id: ...
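When diagnosing this, the X-Amz-Bucket-Region header in the 301 response is the region S3 itself claims the bucket lives in. A small Python sketch (hypothetical helper, for illustration) that pulls it out of a captured raw response like the one above:

```python
from typing import Optional

def region_from_headers(raw_response: str) -> Optional[str]:
    """Extract X-Amz-Bucket-Region from a raw HTTP response header block."""
    for line in raw_response.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == "x-amz-bucket-region":
            return value.strip()
    return None

raw = (
    "HTTP/1.1 301 Moved Permanently\n"
    "Location: https://amazonaws.com/badhttpredirectlocation\n"
    "Server: AmazonS3\n"
    "X-Amz-Bucket-Region: us-east-1\n"
)
print(region_from_headers(raw))  # us-east-1
```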
Sorry, it seems like we are all in different time zones, but here is my provider configuration:
provider "aws" {
  region = "ap-southeast-2"
}

terraform {
  required_version = ">= 1.1.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
My default region is set to eu-central-1 (we deploy in several regions/accounts). This is part of my AWS CLI configuration, in particular ~/.aws/config:
[default]
region=eu-central-1
cli_pager=
I did run the command before as well, thinking that AWS could be telling me the truth about the region, but it is actually set correctly (not in Korea):
aws s3api get-bucket-location --bucket example-bucket-name
{
"LocationConstraint": "ap-southeast-2"
}
AWS_DEFAULT_REGION is not set in my environment (in fact, I don't have any AWS environment variables set):
echo $AWS_DEFAULT_REGION
I have attached a heavily filtered version of my output with debug enabled: terraform.log
This bucket was created about 2 years ago, so it's old but not that old IMHO (April 2020).
It looks like AWS is responding very similarly to both of you. Key parts of the debug logs:
ap-southeast-2 / ap-northeast-2:
2022-02-17T09:25:24.940+1100 [DEBUG] provider.terraform-provider-aws_v4.1.0_x5:
[aws-sdk-go] DEBUG: Request s3/GetBucketAcl Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /?acl= HTTP/1.1
Host: example-name.s3.ap-southeast-2.amazonaws.com
User-Agent: APN/1.0 HashiCorp/1.0 Terraform/1.1.5 (+https://www.terraform.io) terraform-provider-aws/4.1.0 (+https://registry.terraform.io/providers/hashicorp/aws) aws-sdk-go/1.42.53 (go1.17.6; darwin; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=AKIAUBGCRZPA4G6VZ4SH/20220216/ap-southeast-2/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=462e8e18162cea7820cf52f01fff4c028d7d83bab175259a7fa64bd2bffa2e59
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20220216T222524Z
Accept-Encoding: gzip
-----------------------------------------------------: timestamp=2022-02-17T09:25:24.940+1100
2022-02-17T09:25:25.101+1100 [DEBUG] provider.terraform-provider-aws_v4.1.0_x5:
[aws-sdk-go] DEBUG: Response s3/GetBucketAcl Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 301 Moved Permanently
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Wed, 16 Feb 2022 22:25:24 GMT
Location: https://amazonaws.com/badhttpredirectlocation
Server: AmazonS3
X-Amz-Bucket-Region: ap-northeast-2
X-Amz-Id-2: sKkLaJ/+gA+ZSTzKa2tRS6MG1yXNIhARNU14XOMy40ifg3dqklKm4Eujq5/5wCYH9WG07QJB6To=
X-Amz-Request-Id: N747VNKERTB0RS3S
-----------------------------------------------------: timestamp=2022-02-17T09:25:25.101+1100
2022-02-17T09:25:25.101+1100 [DEBUG] provider.terraform-provider-aws_v4.1.0_x5: [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>PermanentRedirect</Code>
<Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
<Endpoint>example-name.s3.ap-northeast-2.amazonaws.com</Endpoint>
<Bucket>example-name</Bucket><RequestId>N747VNKERTB0RS3S</RequestId><HostId>sKkLaJ/+gA+ZSTzKa2tRS6MG1yXNIhARNU14XOMy40ifg3dqklKm4Eujq5/5wCYH9WG07QJB6To=</HostId></Error>: timestamp=2022-02-17T09:25:25.101+1100
2022-02-17T09:25:25.102+1100 [DEBUG] provider.terraform-provider-aws_v4.1.0_x5:
[aws-sdk-go] DEBUG: Validate Response s3/GetBucketAcl failed, attempt 0/25,
error BucketRegionError: incorrect region, the bucket is not in 'ap-southeast-2' region at endpoint '', bucket is in 'ap-northeast-2' region
ca-central-1 / us-east-1:
2022-02-16T14:05:48.109-0500 [DEBUG] provider.terraform-provider-aws_v4.1.0_x5:
[aws-sdk-go] DEBUG: Request s3/GetBucketAcl Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /?acl= HTTP/1.1
Host: terraform.s3.ca-central-1.amazonaws.com
User-Agent: APN/1.0 HashiCorp/1.0 Terraform/1.1.5 (+https://www.terraform.io) terraform-provider-aws/4.1.0 (+https://registry.terraform.io/providers/hashicorp/aws) aws-sdk-go/1.42.53 (go1.17.6; darwin; arm64)
Authorization: AWS4-HMAC-SHA256 Credential=[REDACTED]/20220216/ca-central-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=ffc4b0992ecd5fd940f8808addab96ede05dd75a4b5f33d58fb39e8ff6ff87b9
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20220216T190548Z
Accept-Encoding: gzip
-----------------------------------------------------: timestamp=2022-02-16T14:05:48.109-0500
2022-02-16T14:05:48.223-0500 [DEBUG] provider.terraform-provider-aws_v4.1.0_x5:
[aws-sdk-go] DEBUG: Response s3/GetBucketAcl Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 301 Moved Permanently
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Wed, 16 Feb 2022 19:05:48 GMT
Location: https://amazonaws.com/badhttpredirectlocation
Server: AmazonS3
X-Amz-Bucket-Region: us-east-1
X-Amz-Id-2: Kp4pc43tGnwA9FyXXwOJWyjobsMWHW7ym5tffTZaq5P31DVbBS8g7fg2FtGUMlq8nrFB5S6ym0M=
X-Amz-Request-Id: WPQX4FSBKQMXZP0K
-----------------------------------------------------: timestamp=2022-02-16T14:05:48.223-0500
2022-02-16T14:05:48.223-0500 [DEBUG] provider.terraform-provider-aws_v4.1.0_x5:
[aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>PermanentRedirect</Code>
<Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
<Endpoint>s3.amazonaws.com</Endpoint>
<Bucket>terraform</Bucket><RequestId>WPQX4FSBKQMXZP0K</RequestId><HostId>Kp4pc43tGnwA9FyXXwOJWyjobsMWHW7ym5tffTZaq5P31DVbBS8g7fg2FtGUMlq8nrFB5S6ym0M=</HostId></Error>: timestamp=2022-02-16T14:05:48.223-0500
2022-02-16T14:05:48.223-0500 [DEBUG] provider.terraform-provider-aws_v4.1.0_x5:
[aws-sdk-go] DEBUG: Validate Response s3/GetBucketAcl failed, attempt 0/25,
error BucketRegionError: incorrect region, the bucket is not in 'ca-central-1' region at endpoint '', bucket is in 'us-east-1' region
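The PermanentRedirect body in these logs is plain XML, so the interesting fields can be pulled out directly. A short Python sketch (illustrative only, using a trimmed copy of the body above):

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the PermanentRedirect error body from the log above.
error_xml = """<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>PermanentRedirect</Code>
<Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
<Endpoint>s3.amazonaws.com</Endpoint>
<Bucket>terraform</Bucket></Error>"""

root = ET.fromstring(error_xml)
# The Endpoint element names where S3 actually wants the request sent.
print(root.findtext("Code"))      # PermanentRedirect
print(root.findtext("Endpoint"))  # s3.amazonaws.com
print(root.findtext("Bucket"))    # terraform
```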
@halostatue Is it possible this bucket is old? Can you create a brand new bucket and attempt to import it? The comment above gives you configuration to test out a new bucket.
It’s old, but not that old. It was created a few years ago when I was still using Terraform 0.8. I recently went through a fairly painful upgrade to Terraform 1.1.5, but the only thing I have that is not in ca-central-1 is a cert.
Also, just to make sure there aren't differences between the AWS Console and CLI, can you run this command on the bucket? … According to the debug log, AWS seems to think your bucket moved to us-east-1: …
There’s something wrong with the discovery, then:
$ AWS_PROFILE=terraformer aws s3api get-bucket-location --bucket halostatue-infrastructure
{
"LocationConstraint": "ca-central-1"
}
It will take me some time to run a new bucket test as I plan on running it using the same module that I’m trying to upgrade to see if that’s the cause. You can see the module described here https://github.com/halostatue/terraform-modules/tree/main/aws/content-site, with the in-flight changes to try to support the painfully breaking changes introduced with version 4.0 of this provider here: https://github.com/halostatue/terraform-modules/tree/support-terraform-aws-v4/aws/content-site.
I am creating a debug instance with the following:
module "debug_tfstate_bucket" {
  source = "github.com/halostatue/terraform-modules//aws/s3-tfstate-bucket?ref=v3.x"
  bucket = "debug-halostatue-infrastructure"
  user   = aws_iam_user.terraformer.name
}
(For various reasons, I have pushed new versions under the v3.x tag which no longer require or accept the aws-region and aws-profile variables. This simply inherits the aws provider already configured.)
After creating the bucket, I switch to the support-terraform-aws-v4 branch locally and update my debug definition to:
module "debug_tfstate_bucket" {
  source = "github.com/halostatue/terraform-modules//aws/s3-tfstate-bucket?ref=support-terraform-aws-v4"
  bucket = "debug-halostatue-infrastructure"
  user   = aws_iam_user.terraformer.name
}
And I get the results I have been getting all along:
$ terraform import module.debug_tfstate_bucket.aws_s3_bucket_acl.terraform terraform,private
module.debug_tfstate_bucket.aws_s3_bucket_acl.terraform: Importing from ID "terraform,private"...
module.debug_tfstate_bucket.aws_s3_bucket_acl.terraform: Import prepared!
Prepared aws_s3_bucket_acl for import
module.debug_tfstate_bucket.aws_s3_bucket_acl.terraform: Refreshing state... [id=terraform,private]
╷
│ Error: error getting S3 bucket ACL (terraform,private): BucketRegionError: incorrect region, the bucket is not in 'ca-central-1' region at endpoint '', bucket is in 'us-east-1' region
│ status code: 301, request id: YQT52TRQYJWQ30NY, host id: ilrPwWc+FDkTinnU4ouC3TchXkDAebb0eg+r9RRH6nGT3Nx59t1bslRWNiWDWczYah1+bA7jkYw=
│
│
╵
$ AWS_PROFILE=terraformer aws s3api get-bucket-location --bucket debug-halostatue-infrastructure
{
"LocationConstraint": "ca-central-1"
}
I don’t know what to do with this beyond what I have recorded.
Cause

- Part of the Terraform AWS Provider v4 is moving toward using AWS SDK for Go v2. That includes starting the process of switching HTTP client and transport. v4 includes using the v2 transport with the v1 client. The v2 transport may work differently when it gets a 301 without a Location header. (Currently, the API returns no Location header and v2 fills in Location: https://amazonaws.com/badhttpredirection.)
- S3 is the only service known to return 301 without a Location header. S3 does return an X-Amz-Bucket-Region: ap-northeast-2 header, but this seems to be the incorrect location.

Although this error seems to be new to you, it has been around for a while with #14544. Sometimes the problem was fixed by rm -rf .terraform and re-initializing with terraform init. Other times it was not.

Solutions
Unfortunately, we don't have a clear smoking gun and clear path forward. However, we have some ideas that may or may not help.
Idea 1
The AWS provider v4.2 will include HTTP client and transport changes. However, the change will not be dramatic, so this may only be a 50/50 or lower chance of helping.
Idea 2
As mentioned above, others have found that deleting the .terraform directory fixed the problem. The OP mentions that this did not fix the problem for him.
Idea 3
It's possible that you can use a config workaround to avoid the 301 response. This is basically saying, "Okay, AWS. I'll play your silly game. The bucket is in X region." You would only add the provider argument to the problematic resource.

provider "aws" {
  alias  = "s3-region"
  region = "ap-northeast-2" # the "incorrect" region mentioned in the error
}

resource "aws_s3_bucket_acl" "example" {
  provider = aws.s3-region
  # etc.
}
Unfortunately, since we have not been able to reproduce the problem, we cannot test this idea.
Idea 4
This may be the only true solution, even if it is not very satisfying. We recommend that you reach out to AWS Support and raise the problem for the specific bucket. Although this worked before, we have found many times that things accidentally worked before and "break" as we upgrade different components. But they really shouldn't have worked before.
- AWS S3 should not be responding without including a Location header. We cannot change that response.
- AWS SDK for Go v2 should not be filling in Location: https://amazonaws.com/badhttpredirectlocation.
- AWS S3 is giving inconsistent information for your buckets. The CLI (aws s3api get-bucket-location --bucket yourbucket) gives one result, but the API itself says that the bucket is in a different region via the X-Amz-Bucket-Region HTTP response header.

Raise these specific issues with AWS Support and see if they can adjust something with the bucket to fix the problem. S3 used to work very differently, and it is possible that through the various upgrades and migrations, some buckets were missed.
Hey guys - this is also happening to me, and my bucket is "only" 10 months old. The bucket region is eu-west-2 and I am getting exactly the same error as the OP. Let me know if I can help test this further.
Error: error getting S3 bucket ACL (deeplink,public-read): BucketRegionError: incorrect region, the bucket is not in 'eu-west-2' region at endpoint '', bucket is in 'ap-northeast-1' region
│ status code: 301, request id: xxxx, host id: xxxx
I figured out my error, and it’s an ID10T error.
I’ve made a gist that encapsulates a full test suite for this, driven by terraform.tfvars (gitignored in the gist): https://gist.github.com/halostatue/cf1ec2a93a455815813ac51775b13da4
The main point in the Makefile shows that I was doing the import incorrectly (target import-wrong):
import-wrong:
terraform import \
module.debug_tfstate_bucket.aws_s3_bucket_acl.terraform \
terraform,private || true
When I imported correctly, everything started working (target import-right):
import-right:
terraform import \
module.debug_tfstate_bucket.aws_s3_bucket_acl.terraform \
debug-terraform-bucket-halostatue,private
The error that I am seeing is definitely user error; it may not be the same issue as @korporationcl's. I think there’s still a bug in that we should be getting a "no such bucket" sort of error instead of a "bad region" error.
I also think there’s a documentation improvement that could be made: the example in the upgrade documentation for imports uses bucket = "bucket", which makes terraform import aws_s3_bucket_acl.bucket bucket,private look sensible. If the documentation used bucket = "example-bucket", it would be clearer that the resource label is not the same as the bucket name.
I know this from the few times that I have done resource imports… but dealing with imports properly was the last thing on my mind, as I was having to deal with an unplanned major upgrade when moving to AWS provider 4.x.
Sorry for the wild goose chase on my end, but I do think there are bugs here… just not the one I was seeing.
@halostatue Thank you for your time on this and for letting us know what happened regardless!! That's super helpful. That is a very unhelpful error message, and yes, I'll update the documentation.
For me it was a naming problem, so this saved me: 👍
~> NOTE: When importing into aws_s3_bucket_acl, make sure you use the S3 bucket name (e.g., yournamehere in the example above) as part of the ID, and not the Terraform bucket configuration name (e.g., example in the example above).
Thanks everyone!
I had a similar problem: Error: error waiting for S3 Bucket Versioning status for bucket (mybucket): BucketRegionError: incorrect region, the bucket is not in 'eu-west-1' region at endpoint '' status code: 301, request id: , host id:
When I eventually checked my state file, I saw a mistake: there was a wrong dependency call to a bucket in another region :(. Fixing the state by removing the resource and importing it correctly solved it.
Cause

- Part of the Terraform AWS Provider v4 is moving toward using AWS SDK for Go v2. That includes starting the process of switching HTTP client and transport. v4 includes using the v2 transport with the v1 client. The v2 transport may work differently when it gets a 301 without a Location header. (Currently, the API returns no Location header and v2 fills in Location: https://amazonaws.com/badhttpredirection.)
- S3 is the only service known to return 301 without a Location header. S3 does return an X-Amz-Bucket-Region: ap-northeast-2 header, but this seems to be the incorrect location.

Although this error seems to be new to you, it has been around for a while with #14544. Sometimes the problem was fixed by rm -rf .terraform and re-initializing with terraform init. Other times it was not.

Solutions
Unfortunately, we don't have a clear smoking gun and clear path forward. However, we have some ideas that may or may not help.
Idea 1
The AWS provider v4.2 will include HTTP client and transport changes. However, the change will not be dramatic, so this may only be a 50/50 or lower chance of helping.
Idea 2
As mentioned above, others have found that deleting the .terraform directory fixed the problem. The OP mentions that this did not fix the problem for him.

Idea 3
It's possible that you can use a config workaround to avoid the 301 response. This is basically saying, "Okay, AWS. I'll play your silly game. The bucket is in X region." You would only add the provider argument to the problematic resource.

provider "aws" {
  alias  = "s3-region"
  region = "ap-northeast-2" # the "incorrect" region mentioned in the error
}

resource "aws_s3_bucket_acl" "example" {
  provider = aws.s3-region
  # etc.
}
Unfortunately, since we have not been able to reproduce the problem, we cannot test this idea.
Idea 4
This may be the only true solution, even if it is not very satisfying. We recommend that you reach out to AWS Support and raise the problem for the specific bucket. Although this worked before, we have found many times that things accidentally worked before and "break" as we upgrade different components. But they really shouldn't have worked before.
- AWS S3 should not be responding without including a Location header. We cannot change that response.
- AWS SDK for Go v2 should not be filling in Location: https://amazonaws.com/badhttpredirectlocation.
- AWS S3 is giving inconsistent information for your buckets. The CLI (aws s3api get-bucket-location --bucket yourbucket) gives one result, but the API itself says that the bucket is in a different region via the X-Amz-Bucket-Region HTTP response header.

Raise these specific issues with AWS Support and see if they can adjust something with the bucket to fix the problem. S3 used to work very differently, and it is possible that through the various upgrades and migrations, some buckets were missed.
Idea 3 worked for me!
We just upgraded our provider to v4 and are encountering this issue. With v3 and earlier there were no issues. We have always had 2 AWS providers configured in one particular workspace:
provider "aws" {
  region                   = "us-west-2"
  shared_credentials_files = ["/path/to/.aws/credentials"]
  profile                  = "default"
}

provider "aws" {
  region                   = "us-west-1"
  alias                    = "us-west-1"
  shared_credentials_files = ["/path/to/.aws/credentials"]
  profile                  = "default"
}
Almost all our resources use region us-west-2 by default, but one of our buckets uses us-west-1, so we set the provider in that resource:
resource "aws_s3_bucket" "custom_bucket_name" {
  provider      = aws.us-west-1
  bucket        = "custom-bucket-name"
  force_destroy = false
}

resource "aws_s3_bucket_acl" "custom_bucket_name" {
  bucket = aws_s3_bucket.custom_bucket_name.id
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "custom_bucket_name" {
  bucket = aws_s3_bucket.custom_bucket_name.id
  versioning_configuration {
    mfa_delete = "Disabled"
    status     = "Suspended"
  }
}
We attempted "Idea 2" (removing the .terraform directory and running init) and "Idea 3" by pointing to the provider alias (which is what we always had in v3 and lower, and which worked).
After the provider upgrade, this is the error we encountered:
```text
Error: error getting S3 bucket versioning (custom-bucket-name): BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region at endpoint ''
	status code: 301, request id: , host id:

  with aws_s3_bucket_versioning.custom_bucket_name,
  on s3_custom_bucket_name.tf line 13, in resource "aws_s3_bucket_versioning" "custom_bucket_name":
  13: resource "aws_s3_bucket_versioning" "custom_bucket_name" {
```
Removing the resource from state and re-importing did not resolve the issue. It seems the `aws_s3_bucket_acl` resource was also removed when I removed the `aws_s3_bucket_versioning` resource. An attempt to re-import the ACL resource failed:
```text
Error: error getting S3 bucket ACL (custom-bucket-name,999999999999,private): BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region at endpoint ''
│ status code: 301, request id: , host id:
```
Finally, I attempted to remove and re-import the bucket resource itself, `aws_s3_bucket`, which includes the reference to the non-default provider alias. Although the bucket resource imported fine, the other two resources, `aws_s3_bucket_versioning` and `aws_s3_bucket_acl`, still failed with region errors.
After some testing, it appears that adding `provider = aws.us-west-1` to each new resource solved my issue.
```hcl
resource "aws_s3_bucket_acl" "custom_bucket_name" {
  provider = aws.us-west-1
  bucket   = aws_s3_bucket.custom_bucket_name.id
  acl      = "private"
}

resource "aws_s3_bucket_versioning" "custom_bucket_name" {
  provider = aws.us-west-1
  bucket   = aws_s3_bucket.custom_bucket_name.id
  versioning_configuration {
    mfa_delete = "Disabled"
    status     = "Suspended"
  }
}
```
Idea 3
It's possible that you can use a config workaround to avoid the 301 response. This is basically saying, "Okay, AWS. I'll play your silly game. The bucket is in X region." You would only add the `provider` argument to the problematic resource.

```hcl
provider "aws" {
  alias  = "s3-region"
  region = "ap-northeast-2" # the "incorrect" region mentioned in the error
}

resource "aws_s3_bucket_acl" "example" {
  provider = aws.s3-region
  # etc.
}
```
Unfortunately, since we have not been able to reproduce the problem, we cannot test this idea.
I can confirm that this workaround worked for me. However, it's quite inconvenient in some cases: providers can't be specified dynamically, so those who use `for_each` in a resource must use one provider for all of its instances. This means we'd have to split one resource with `for_each` into multiple resources, one per region/provider.
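To illustrate the limitation described above, here is a minimal sketch with hypothetical bucket names. Terraform's `provider` meta-argument must be a static reference, not an expression, so a single `for_each` resource cannot choose a provider per instance and has to be split per region:

```hcl
# Does NOT work: "provider" cannot vary per for_each instance.
# resource "aws_s3_bucket_acl" "all" {
#   for_each = toset(["bucket-usw1", "bucket-usw2"])               # hypothetical names
#   provider = each.key == "bucket-usw1" ? aws.us-west-1 : aws     # invalid: no expressions allowed
#   ...
# }

# Workaround: one resource (and one for_each set) per region/provider.
resource "aws_s3_bucket_acl" "us_west_1" {
  for_each = toset(["bucket-usw1"]) # hypothetical bucket names in us-west-1
  provider = aws.us-west-1
  bucket   = each.key
  acl      = "private"
}

resource "aws_s3_bucket_acl" "us_west_2" {
  for_each = toset(["bucket-usw2"]) # hypothetical bucket names in the default region
  bucket   = each.key
  acl      = "private"
}
```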
Idea 4
This may be the only true solution even if it is not very satisfying. We recommend that you reach out to AWS Support and raise the problem for the specific bucket. Although this worked before, we have found many times that things accidentally worked before that "break" as we upgrade different components. But, they really shouldn't have worked before.
1. AWS S3 should not be responding without including a `Location` header. We cannot change that response.
2. AWS SDK for Go v2 should not be filling in `Location: https://amazonaws.com/badhttpredirectlocation`.
3. AWS S3 is giving inconsistent information for your buckets. The CLI (`aws s3api get-bucket-location --bucket yourbucket`) gives one result, but the API itself says the bucket is in a different region via the `X-Amz-Bucket-Region` HTTP response header.
Raise these specific issues with AWS Support and see if they can adjust something with the bucket to fix the problem. S3 used to work very differently, and it is possible that through the various upgrades and migrations, some buckets were missed.
I contacted AWS Support, but they insist the API behaves this way by design (even though they clearly send malformed 301 responses):

> The location header is not included on the response from S3, and this is by design from the API.

So I think, realistically, this might never be fixed on the AWS side.
I'd like to propose another solution. Could we add an optional `region` argument to the `aws_s3_bucket` resource? By default it would use the region from the provider, but when specified it would be passed to the SDK so that the correct URL is used to retrieve bucket details. Is that something that could work?
I also encountered this issue when using the `aws_s3_bucket` data source. All my resources were in eu-west-1, but the bucket I was referencing was in eu-central-1. I did not find a way around it, but since I had the bucket name and only needed that, I just used the raw string of the bucket name.
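The workaround above can be sketched as follows (with hypothetical names): either skip the data source and use the bucket name directly, or pin the data source to a provider aliased to the bucket's actual region, which data sources accept via the `provider` meta-argument:

```hcl
# Option A: skip the data source and use the name directly (what worked above).
locals {
  bucket_name = "my-eu-central-1-bucket" # hypothetical name
}

# Option B: point the data source at a provider aliased to the bucket's region.
provider "aws" {
  alias  = "eu-central-1"
  region = "eu-central-1"
}

data "aws_s3_bucket" "cross_region" {
  provider = aws.eu-central-1
  bucket   = local.bucket_name
}
```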
As we were playing around with the initial setup, we encountered this issue with the Elastic Beanstalk source. Admittedly, we were running so many tests that I am not sure exactly what happened, but at one point we changed the name of the key for the Beanstalk, and the error still showed the old name, which is the key stored in the state.
Error:

```text
reading S3 Object (dist/ca-central-1/Backend/prod/dist-016a3135fde1.zip): BucketRegionError: incorrect region, the bucket is not in 'ca-central-1' region at endpoint '', bucket is in 'us-east-1' region

  with module.bucket.aws_s3_object.dist_item,
│ on bucket/main.tf line 25, in resource "aws_s3_object" "dist_item":
│ 25: resource "aws_s3_object" "dist_item" {
```
We always thought it happened at the creation of the file/bucket, but it turns out it must happen at the moment of replacing the original file.
This is our updated resource, which still gives the error above (note that the `dist_item` file is now different, but the error still references the old one):
```hcl
resource "aws_s3_object" "dist_item" {
  depends_on = [aws_s3_bucket_versioning.bucket_versioning, aws_s3_bucket.bucket]
  key        = "dist-${local.identifier}.zip"
  bucket     = aws_s3_bucket.bucket.bucket
  source     = local.dist_file
}
```
Now this is the state (shortened):
```json
"attributes": {
  "bucket": "xxxxxx-devops",
  "id": "dist/ca-central-1/Backend/prod/dist-016a3135fde1.zip",
  "key": "dist/ca-central-1/Backend/prod/dist-016a3135fde1.zip",
},
```
Now THAT bucket (xxxxx-devops) is the one from the original deployment (which is indeed in us-east-1, as we wanted to have just one bucket for everything), not the new one, which was correctly created in the right region. So the error is a bit misleading. This seems to happen at the moment of deleting the initial zip, yet it still puts Terraform in a deadlock, because it cannot delete the resource in order to create the new one.
A `terraform destroy` gives the same error as above.
To fix it, we removed the item from state:

```shell
terraform state rm --dry-run "module.bucket.aws_s3_object.dist_item"
```
And then plan and apply worked again. Not sure if this is a bug or a very unfortunate state but hey, just posting here in case it helps.
Terraform CLI and Terraform AWS Provider Version
Affected Resource(s)
Terraform Configuration Files
Debug Output
Expected Behavior
I migrated the code to use the new aws_s3_bucket_acl resource (since I previously had the acl = "private" parameter set, which fails with the 1.1.x release), according to the documentation from here. However, when I import the resource into Terraform, it fails for some reason and tells me the bucket is in a different region (which it is not, since I confirmed the bucket was created in the region 'ap-southeast-2', not in Korea). Having said that, everything was working well until someone upgraded to the latest release.
Importing the resource should work.
Actual Behavior
The import mentions an endpoint that is not in use and does not add the resource to the state file.
Steps to Reproduce
```shell
terraform init --upgrade
terraform import aws_s3_bucket_acl.example example,private
```
References
#14544
#23248