pquerna closed this issue 9 years ago.
It doesn't need to be hacky.
You can implement this by using maps holding different AWS access/secret keys per region as you see fit.
Check http://www.terraform.io/intro/getting-started/variables.html for mapping and http://www.terraform.io/docs/configuration/interpolation.html#lookup_map__key_ for the variable interpolation and lookup
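For example, something along these lines (a rough sketch with made-up variable names, using the map/lookup interpolation from the docs above):

variable "aws_access_keys" {
  default = {
    us-east-1 = "ACCESS_KEY_FOR_US_EAST_1"
    us-west-2 = "ACCESS_KEY_FOR_US_WEST_2"
  }
}

variable "aws_secret_keys" {
  default = {
    us-east-1 = "SECRET_KEY_FOR_US_EAST_1"
    us-west-2 = "SECRET_KEY_FOR_US_WEST_2"
  }
}

variable "region" {
  default = "us-east-1"
}

provider "aws" {
  # look up the credentials for the chosen region
  access_key = "${lookup(var.aws_access_keys, var.region)}"
  secret_key = "${lookup(var.aws_secret_keys, var.region)}"
  region     = "${var.region}"
}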
@pmoust: But you can only have a single configuration of the AWS provider active at a time -- the region is passed in there -- so I'm not sure how maps holding different access/secret keys per region make a difference?
@pquerna I see, you are right -- at the moment it is not possible to 're-instantiate' a provider. Valid point, and it is needed at times, e.g. for organisations that need completely separate accounts for live/staging Ops. Or your case, of course.
+1
+1 to the multi-region use case
I haven't applied this state yet, but the terraform plan output below seems to indicate that the multi-region use case is supported through the use of modules.
module "ap-northeast-1" {
source = "./shared"
region = "ap-northeast-1"
servers = "1"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
module "us-east-1" {
source = "./shared"
region = "us-east-1"
servers = "1"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
+ module.us-east-1.aws_instance.ipfs
    ami:               "" => "ami-12f27d7a"
    availability_zone: "" => "<computed>"
    instance_type:     "" => "t2.micro"
    key_name:          "" => "btc"
    private_dns:       "" => "<computed>"
    private_ip:        "" => "<computed>"
    public_dns:        "" => "<computed>"
    public_ip:         "" => "<computed>"
    security_groups.#: "" => "1"
    security_groups.0: "" => "sg-6c849109"
    subnet_id:         "" => "subnet-932ac3b8"
    tags.Name:         "" => "us-east-1-ipfs-0"

+ module.ap-northeast-1.aws_instance.ipfs
    ami:               "" => "ami-77487776"
    availability_zone: "" => "<computed>"
    instance_type:     "" => "t2.micro"
    key_name:          "" => "btc"
    private_dns:       "" => "<computed>"
    private_ip:        "" => "<computed>"
    public_dns:        "" => "<computed>"
    public_ip:         "" => "<computed>"
    security_groups.#: "" => "1"
    security_groups.0: "" => "sg-6c849109"
    subnet_id:         "" => "subnet-932ac3b8"
    tags.Name:         "" => "ap-northeast-1-ipfs-0"
+1
This is a really huge issue for my use case at Leaf, and the module system is a non-starter for us. Maybe a default_region and a mapping of regions to credentials?
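Something like this, perhaps (completely hypothetical syntax, just to illustrate the idea -- neither default_region nor a credentials map exists today):

provider "aws" {
  # hypothetical: fall back to this region when a resource doesn't specify one
  default_region = "us-east-1"

  # hypothetical: per-region credentials
  credentials = {
    us-east-1 = "${var.east_access_key}"
    eu-west-1 = "${var.eu_access_key}"
  }
}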
@briantigerchow does your example above still work for you? What does the module def look like?
I can't get provider "aws" to work inside a module.
@ctro did you get this working?
@outrunthewolf well, yeah. I ended up just using the CLI option -state=path (https://terraform.io/docs/commands/apply.html). This way I can save one state file per region.
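For example (a sketch; paths and variable names made up):

terraform apply -state=./states/us-east-1.tfstate -var 'region=us-east-1'
terraform apply -state=./states/ap-northeast-1.tfstate -var 'region=ap-northeast-1'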
Thanks
:+1: for this
I'm interested in working on this. While the original suggestion is pretty clever, I noticed that the DigitalOcean provider allows the "region" setting per instance, so maybe it would be useful to support that in AWS as well, for consistency.
The original suggestion of creating multiple copies of the AWS provider should require fewer code changes, although I think the per-instance "region" setting might be nicer in the long term.
I'd appreciate any feedback on the two approaches in case there are tradeoffs I've overlooked.
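For reference, this is roughly what the DigitalOcean provider allows today (a sketch; names and sizes are made up):

provider "digitalocean" {
  token = "${var.do_token}"
}

# same provider, two regions, chosen per resource
resource "digitalocean_droplet" "web_nyc" {
  image  = "ubuntu-14-04-x64"
  name   = "web-nyc"
  region = "nyc3"
  size   = "512mb"
}

resource "digitalocean_droplet" "web_ams" {
  image  = "ubuntu-14-04-x64"
  name   = "web-ams"
  region = "ams3"
  size   = "512mb"
}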
The actual issue comes from the provider requiring a region. It's quite easy to use modules to do multi-region setups, but they fail when they check for specific AMIs based on the provider region.
@outrunthewolf The AWS provider requires a region because each region has its own API endpoints, so you can't connect to anything until you have decided what region to use (unlike DigitalOcean, which takes the region as a parameter to each API call). You could move the region into the resources, and then have them connect as necessary. Then what if you have multiple accounts? You could move the access and secret keys into the resources also, but then why have provider configuration at all?
I think the problem is that provider configurations are singletons, which is bad because they actually aren't. And incidentally, the workaround of module-scoped provider blocks seems to no longer work on master -- I get errors about required fields not being provided even though they are when I try. I'm not really sure what behavior to expect, since the intersection of modules and provider configuration doesn't seem to be covered in the docs at all.
@bitglue yeah, I am working on removing that singleton restriction based on some feedback from folks at Hashicorp. PR #1281 above isn't quite complete, but has a proof-of-concept allowing multiple providers defined in the same config.
:+1: for this
Done for 0.5
Is this the documentation for this?
https://www.terraform.io/docs/configuration/providers.html
Could you advise me on how the syntax for a module would work?
@outrunthewolf correct, that is the documentation.
You could specify the provider inside of your module and then reference that provider in your resources.
Or do you mean that you want to inject a provider? In that case you would inject the name of the provider and use that name in your resources.
So say I'm using AWS and I want to have a single provider and multiple regions. Excuse my stupidity here, but is there any chance you could give me some concrete (code) examples? I can't quite visualise the approaches you've explained.
@outrunthewolf no worries!
Example of injecting a provider alias into a module:
main.tf
provider "aws" {
access_key = "foo"
secret_key = "bar"
region = "us-east-1"
alias = "east"
}
provider "aws" {
access_key = "foo"
secret_key = "bar"
region = "us-west-1"
alias = "west"
}
module "some module" {
source = "your_module"
provider_alias = "east"
}
module "some module again" {
source = "your_module"
provider_alias = "west"
}
your_module/main.tf
variable "provider_alias" {}
resource "aws_instance" "foo" {
provider = "aws.${var.provider_alias}"
# ...
}
Let me know if that clears things up!
Hmm, this setup is kind of weak: if you change the name of a module to support multiple regions, it tries to recreate everything due to the name change. Modules are not very modular at all if you can't shuffle things around, in and out of modules, etc. You're effectively locked into your initial configuration.
Also appears to be broken, unless I'm missing something. I have the same setup as @jszwedko mentioned, and each resource has:
provider = "aws.${var.provider_alias}"
but I get dozens of:
... resource depends on non-configured provider 'aws.${var.provider_alias}'
seems like the interpolation stuff is failing with v0.6.14
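As far as I can tell, the provider meta-parameter is simply never interpolated, so the alias has to be a literal string:

resource "aws_instance" "foo" {
  # works: literal alias
  provider = "aws.east"

  # fails: interpolation is not evaluated for the provider field
  # provider = "aws.${var.provider_alias}"
}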
Let's say we want to deploy our app into 2 different regions in 3 different accounts. Is there any clean way of doing this without duplicating templates? Terraform doesn't seem to support nested maps, which would probably help:
variable "vpc-id" {
type = "map"
default = {
test = {
us-west-2 = "vpc-123654"
us-east-1 = "vpc-789654"
}
prod = ...
}
Actually, I think I found a decent way to achieve what I wanted. I created a separate tfvars file for each region, and deploying the same configuration to a different region is now as easy as specifying -var-file=./us-west-2.tfvars, for example. The state for each region-account combination is saved in a separate state file.
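For example (a sketch; file names made up):

terraform plan  -var-file=./regions/us-west-2.tfvars -state=./us-west-2.tfstate
terraform apply -var-file=./regions/us-west-2.tfvars -state=./us-west-2.tfstate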
I read through all of the comments on this issue, and I'm not clear what the current status is. Can someone update with the current recommended best practice for deploying resources in multiple AWS regions, whether in the same account or not, in the same .tf file, or is this still not possible?
@freimer FWIW I ended up just having 5 separate terraform states, one per region and a little bash script. Hacky but it kind of works. As far as I know the provider support is still broken
Hmm, when I try the provider alias as described in the docs I get this:
provider = "aws.s3"
I'll try the module approach, but this should either work as documented, or the documentation should be pulled if the code is not functional yet. Of course, I could have something wrong in my .tf file, but it is quite a simple setup.

Yeah, this doesn't work with modules either. I guess I'll go with different sub-directories containing completely separate Terraform files in each, plus a wrapper script.
@freimer based on the error it sounds like an S3 bucket with the same name already exists in the "us-west-2" region. Depending on what you're trying to do you may need to either pick a new name for the bucket, or delete the one in "us-west-2" before creating this one in "us-east-1".
@freimer , I guess the best approach would depend on your use case, I'll just describe what we do.
We're using Terraform to create "stacks" (we call them sandboxes) in multiple regions. Each sandbox contains an ELB and several autoscaling groups (+ associated launch configurations). This is essentially our unit of deployment to get the latest version of our code into stage or production. It assumes that VPCs, IAM roles, and S3 buckets are already created, we did this using either Terraform or Cloudformation. This underlying infrastructure doesn't change too often, so it's outside of our normal deployment scope. Terraform state is saved in a region-specific S3 bucket. This is how we manage the configuration:
- A config.tf file, which declares all the parameters that are needed for a deployment. Some of them are also defined there - for example, instance type or desired instance count is the same for all regions, so we define them there. Most of them are empty, however.
- A regions folder, which keeps a bunch of region-specific tfvars files. They are named the same as the region itself, for example us-west-2.tfvars.

For example, in config.tf:

variable "vpc-id" {
  type    = "map"
  default = {}
}
And then define it in "us-west-2.tfvars":
vpc-id.test = "vpc-1234"
vpc-id.stage = "vpc-2345"
vpc-id.prod = "vpc-9897"
When we need to find this VPC id in an actual template, we do it like this:
vpc-id = "${lookup(var.vpc-id, var.aws-env)}"
This looks for an aws-env key (which can be test, stage, or prod, and is specified on the command line at deploy time) in a map of vpc-ids.
Terraform is always invoked from a wrapper script. There are several steps (note that I'm using PowerShell here, so the syntax is different from bash):

1. Nuke the .terraform folder. We don't need the previous state file screwing up our deployment. I played with naming them differently; that didn't work for me.
$cachedConfig = Join-Path (Convert-Path .) "\.terraform"
if (Test-Path $cachedConfig) {
  Write-Verbose "Removing cached config $cachedConfig..."
  Remove-Item $cachedConfig -Recurse -Force
}
This is equivalent to rm -rf ./.terraform/.
2. Set remote state:
terraform remote config `
  -backend=s3 `
  -backend-config="bucket=$s3Bucket" `
  -backend-config="key=$fullKey" `
  -backend-config="region=$region" `
  -backend-config="encrypt=true" `
  -backend-config="acl=bucket-owner-full-control"
3. Plan the changes:
# This is mildly awkward, but Terraform doesn't seem
# to expand the file path properly if I pass it directly
$command = "terraform plan "
$command += "-var 'aws-env=$environment' "
$command += "-var 'sandbox=$sandbox' "
if ($regionConfig) {
$command += "-var-file=$regionConfig "
}
$command += "-no-color "
# Use tee to capture output from Terraform and log it, otherwise it goes to the console only
Invoke-Expression $command | Tee-Object -file "$env:TEMP/temp.tfplan"
The plan takes an environment name (e.g. test) and a sandbox name (this is used for tagging).

This may seem somewhat complicated and hacky, but it works pretty well for us. I should probably write a blog post or something; it looks like it could be interesting for others.
@mgood no, that is not the case. The buckets don't exist in either east or west. Well, they do now, because I broke out the S3 resources to a completely separate template, and now it works fine. I think what is happening is that the location constraint is being set in the body of the request, but the AWS S3 API endpoint for another region is being used. See the example from the AWS docs:
PUT / HTTP/1.1
Host: BucketName.s3.amazonaws.com
Content-Length: length
Date: date
Authorization: authorization string (see Authenticating Requests (AWS Signature Version 4))

<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <LocationConstraint>BucketRegion</LocationConstraint>
</CreateBucketConfiguration>
The S3 API endpoints for regions other than US Standard (us-east-1) are different...
This may be an S3-only issue.
@endofcake Thank you very much! This looks like it will indeed be very useful when we get to that point. We have been using CloudFormation, which works well but obviously has issues with multi-region or even multi-cloud "templates" (resources in both AWS and Azure, for instance). What I'm working on right now is the base infrastructure: the VPCs themselves, subnets, firewalls, and S3 buckets. It appears there are some issues specifically with S3 in multi-region Terraform templates, so I'll test with a non-S3 resource. Your post will come in very useful when we get to converting our application templates.
Thanks!
@tj Thanks for the advice. I've done the same, but there are still problems when I want to manage "global" resources (like IAM roles). If I keep several tfstate files and apply a second time, I get an error that the resource already exists. I had to separate the "global" resources into a "global.tf" file with its own state file.
@Ohadbasan, yep, and it makes perfect sense to separate global resources into a separate state (you'd have the same problem with CloudFormation). If you need to share some resource references between separate stacks, I believe you could use modules for this (haven't tried this myself though).
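A related option for sharing references across stacks (a different technique from modules) is the terraform_remote_state data source. A sketch, assuming the global stack's state lives in S3 and exports an iam_role_name output (bucket and output names are made up):

data "terraform_remote_state" "global" {
  backend = "s3"
  config {
    bucket = "my-terraform-state"
    key    = "global/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_iam_role_policy_attachment" "readonly" {
  # reference an output exported by the global stack
  role       = "${data.terraform_remote_state.global.iam_role_name}"
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}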
Alternatively, it is possible to keep both global and regional resources in the same statefile, but it'd mean that you essentially have to duplicate regional resources and use aliases to reference the region. E.g.:
provider "aws" {
# Default to Oregon
region = "us-west-2"
}
provider "aws" {
alias = "oregon"
region = "us-west-2"
}
provider "aws" {
alias = "virginia"
region = "us-east-1"
}
# Oregon
resource "aws_route" "elb_peering_oregon" {
provider = "aws.oregon"
*****
}
# Virginia
resource "aws_route" "elb_peering_virginia" {
provider = "aws.virginia"
*****
}
Definitely not ideal.
I worked up an example with multiple regions in a pure (no hacks) tf app: https://github.com/kung-foo/multiregion-terraform. Everything seems to work with all 11 regions and a single state file.
FYI, if I recall correctly, the issue I had with multi-region and S3 is that it takes a while for a bucket with a particular name to be deleted completely. So if you have a "bucket1" in us-east-1 and need to "move" it to another region, you can't just delete it and immediately re-create it in the other region. I don't believe there are any issues with Terraform and multi-region; it is more of an AWS timing issue.
Ah, yeah, I've had issues like that in the past (pre-tf) with buckets. In my case, though, I wanted to simply deploy a stack to multiple regions and hadn't been able to find a working example. Then I got distracted by the idea of deploying to every possible AZ with a single tf apply.
This is quite an old post, but I'm looking for some guidance: is there any way to reference one region's data in Terraform while applying in another region? My use case is AWS CloudFront, which expects the ACM certificate to be present in us-east-1, while I need to apply in eu-west-1.
@hugoduncan I have the same use case and the same problem.
@hugomatinho @rimiti See here for more information: Create a new provider with an alias, just for CloudFront/ACM:
provider "aws" {
alias = "useast1"
region = "us-east-1"
}
and in your ACM resource use the provider meta-argument:
resource "aws_acm_certificate" "cert" {
provider = aws.useast1
...
}
or for modules:
module "acm" {
source = "terraform-aws-modules/acm/aws"
providers = {
aws = "aws.useast1"
}
...
}
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Didn't see any examples of this yet, and for a hack I'm working on I wanted instances in multiple regions. I played with having many different terraform directories... but it'd be nice to have just one.
Multiple regions with the same provider is the easy example, but the problem also applies to having multiple accounts for the same provider.
A "quick" hack that I was thinking about was basically if you changed Provider's ResourcesMap to have a dynamic prefix based on whatever the provider is called by the user. Eg, if you changed:
"aws_instance"
toproviderName + "_instance"
around here:https://github.com/hashicorp/terraform/blob/master/builtin/providers/aws/provider.go#L44-L52
An example of how this would look to the user:
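Something like this, perhaps (hypothetical syntax; none of this is implemented):

# hypothetical: the provider's name prefixes its resource types
provider "aws_west" {
  region = "us-west-2"
}

provider "aws_east" {
  region = "us-east-1"
}

resource "aws_west_instance" "web" {
  # ...
}

resource "aws_east_instance" "web" {
  # ...
}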
Thoughts?