rifelpet closed this issue 3 years ago.

/kind bug

The terraform prow job revealed a bug in the recent change to define S3 objects via terraform:
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/e2e-kops-grid-scenario-terraform/1422048227233894400

All kops AWS prow jobs use the same S3 state store bucket, yet each job invocation randomly picks the region and zones to deploy the cluster in. With terraform output, the aws_s3_bucket_object resources need to use a provider configured for the bucket's region rather than the cluster's region whenever the two differ.

If the bucket's region is known prior to task execution, we could conditionally define a second aliased aws provider configured for the bucket's region and point the aws_s3_bucket_object resources at it, as sketched below.
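A minimal sketch of what that generated configuration could look like when the regions differ. The "files" alias, the region values, and the bucket name are illustrative assumptions, not actual kops output:

# Default provider in the cluster's region (example value).
provider "aws" {
  region = "us-east-1"
}

# Second provider, emitted only when the state store bucket's region
# differs from the cluster's. The "files" alias is hypothetical.
provider "aws" {
  alias  = "files"
  region = "us-west-2" # the bucket's region, known before task execution
}

# Managed files are written through the aliased provider.
resource "aws_s3_bucket_object" "cluster-completed-spec" {
  provider               = aws.files
  bucket                 = "testingBucket"
  content                = file("${path.module}/data/aws_s3_bucket_object_cluster-completed.spec_content")
  key                    = "clusters.example.com/complex.example.com/cluster-completed.spec"
  server_side_encryption = "AES256"
}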
It may be possible to define an aws_s3_bucket data source that looks up the state store bucket and pass its region to the second provider. We could do this unconditionally, which may make things simpler:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/s3_bucket
# Look up the state store bucket to discover its region.
data "aws_s3_bucket" "kops_state_store" {
  bucket = "testingBucket"
}

# Aliased provider pinned to the bucket's region, which may differ
# from the default provider's region.
provider "aws" {
  region = data.aws_s3_bucket.kops_state_store.region
  alias  = "state_store"
}

resource "aws_s3_bucket_object" "cluster-completed-spec" {
  provider               = aws.state_store
  bucket                 = "testingBucket"
  content                = file("${path.module}/data/aws_s3_bucket_object_cluster-completed.spec_content")
  key                    = "clusters.example.com/complex.example.com/cluster-completed.spec"
  server_side_encryption = "AES256"
}
We may want to consider this a higher priority (possibly even a blocker) for 1.22, given that the TerraformManagedFiles feature flag defaults to true there. Anyone using a state store bucket in a different region from their cluster will suddenly encounter terraform apply errors after upgrading kops and will have to consult the release notes and disable the feature flag until we get this fixed.
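For anyone hitting this in the meantime, kops feature flags are set through the KOPS_FEATURE_FLAGS environment variable; assuming the usual minus-prefix syntax for disabling a default-on flag, that would look something like:

export KOPS_FEATURE_FLAGS="-TerraformManagedFiles"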
/close

Fixed by https://github.com/kubernetes/kops/issues/12092 and will be in kops 1.22.