hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

Multiple regions with same Provider #451

Closed · pquerna closed this issue 9 years ago

pquerna commented 10 years ago

Didn't see any examples of this yet, and for a hack I'm working on I wanted instances in multiple regions. I played with having many different terraform directories... but it'd be nice to have just one.

Multiple regions with the same provider is the easy example, but the problem also applies to having multiple accounts for the same provider.

A "quick" hack I was thinking about: change the Provider's ResourcesMap to use a dynamic prefix based on whatever name the user gives the provider. E.g., if you changed "aws_instance" to providerName + "_instance" around here:

https://github.com/hashicorp/terraform/blob/master/builtin/providers/aws/provider.go#L44-L52

An example of how this would look to the user:

providers {
    awseast = "terraform-provider-aws"  
    awswest = "terraform-provider-aws"  
}

provider "awseast" {
    region = "us-east-1"
}

provider "awswest" {
    region = "us-west-2"
}

resource "awswest_instance" "example-west" {
    ami = "ami-f52c63c5"
    instance_type = "t2.micro"
    key_name = "pquerna"
}

resource "awseast_instance" "example-east" {
    ami = "ami-d878c3b0"
    instance_type = "t2.micro"
    key_name = "pquerna"
}

Thoughts?

pmoust commented 10 years ago

It doesn't need to be hacky. You can implement this by using maps holding different AWS access/secret keys per region as you see fit. Check http://www.terraform.io/intro/getting-started/variables.html for maps and http://www.terraform.io/docs/configuration/interpolation.html#lookup_map__key_ for variable interpolation and lookup.
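As a sketch of that idea (variable names and key values are invented for illustration, using the 0.x interpolation syntax from this thread):

```hcl
# Per-region credential map; the values below are placeholders.
variable "region" {}

variable "access_keys" {
  type = "map"
  default = {
    us-east-1 = "PLACEHOLDER_EAST_KEY"
    us-west-2 = "PLACEHOLDER_WEST_KEY"
  }
}

provider "aws" {
  region     = "${var.region}"
  access_key = "${lookup(var.access_keys, var.region)}"
}
```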

pquerna commented 10 years ago

@pmoust: But you can only have a single version of the AWS provider active at a time -- the region is passed in there -- so I'm not sure how maps to hold the different credentials for different regions make a difference?

pmoust commented 10 years ago

@pquerna I see, you are right: atm it is not possible to 're-instantiate' a provider. Valid point, and it is needed at times, e.g. for organisations that need completely separate accounts for live/staging Ops. Or your case, of course.

frntn commented 10 years ago

+1

btc commented 10 years ago

+1 to the multi-region use case

I haven't applied this state yet, but the terraform plan below seems to indicate that the multi-region use case is supported through the use of modules.

module "ap-northeast-1" {
    source = "./shared"

    region = "ap-northeast-1"
    servers = "1"
    access_key = "${var.access_key}"
    secret_key = "${var.secret_key}"
}

module "us-east-1" {
    source = "./shared"

    region = "us-east-1"
    servers = "1"
    access_key = "${var.access_key}"
    secret_key = "${var.secret_key}"
}

+ module.us-east-1.aws_instance.ipfs
    ami:               "" => "ami-12f27d7a"
    availability_zone: "" => "<computed>"
    instance_type:     "" => "t2.micro"
    key_name:          "" => "btc"
    private_dns:       "" => "<computed>"
    private_ip:        "" => "<computed>"
    public_dns:        "" => "<computed>"
    public_ip:         "" => "<computed>"
    security_groups.#: "" => "1"
    security_groups.0: "" => "sg-6c849109"
    subnet_id:         "" => "subnet-932ac3b8"
    tags.Name:         "" => "us-east-1-ipfs-0"

+ module.ap-northeast-1.aws_instance.ipfs
    ami:               "" => "ami-77487776"
    availability_zone: "" => "<computed>"
    instance_type:     "" => "t2.micro"
    key_name:          "" => "btc"
    private_dns:       "" => "<computed>"
    private_ip:        "" => "<computed>"
    public_dns:        "" => "<computed>"
    public_ip:         "" => "<computed>"
    security_groups.#: "" => "1"
    security_groups.0: "" => "sg-6c849109"
    subnet_id:         "" => "subnet-932ac3b8"
    tags.Name:         "" => "ap-northeast-1-ipfs-0"

eropple commented 9 years ago

+1

This is a really huge issue for my use case at Leaf and the module system is a non-starter for us. Maybe a default_region and a mapping of regions to credentials?
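Purely as an illustration of that suggestion (hypothetical syntax that was never implemented; `default_region` and `region_credentials` are invented names):

```hcl
# Hypothetical configuration sketch -- NOT valid Terraform syntax.
# Illustrates a single provider block carrying a default region plus
# a per-region credential mapping.
provider "aws" {
  default_region = "us-east-1"

  region_credentials = {
    us-east-1 = "${var.east_credentials}"
    us-west-2 = "${var.west_credentials}"
  }
}
```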

ctro commented 9 years ago

@briantigerchow does your example above still work for you? What does the module def look like?

I can't get provider "aws" to work inside a module.

outrunthewolf commented 9 years ago

@ctro did you get this working?

ctro commented 9 years ago

@outrunthewolf well, yeah.

I ended up just using the CLI option -state=path (https://terraform.io/docs/commands/apply.html). This way I can save one state-file per region.

outrunthewolf commented 9 years ago

Thanks

jszwedko commented 9 years ago

:+1: for this

mgood commented 9 years ago

I'm interested in working on this, and while the original suggestion is pretty clever, I noticed that the DigitalOcean provider allows the "region" setting per-instance, so maybe it would be useful to support that with AWS as well for consistency.

The original suggestion of creating multiple copies of the AWS provider should require fewer code changes, although I think the per-instance "region" setting might be nicer in the longer term.

I'd appreciate any feedback on the different approaches if there's some tradeoffs I've overlooked.

outrunthewolf commented 9 years ago

The actual issue comes from the provider requiring a region. It's quite easy to use modules for multi-region setups, but they fail when they check for specific AMIs based on the provider region.
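A common way around the AMI lookup problem (a sketch; the variable names and AMI IDs are placeholders) is to pass the region into the module and look the AMI up from a per-region map:

```hcl
variable "region" {}

# One AMI ID per region; the IDs below are placeholders.
variable "amis" {
  type = "map"
  default = {
    us-east-1 = "ami-00000001"
    us-west-2 = "ami-00000002"
  }
}

resource "aws_instance" "example" {
  ami           = "${lookup(var.amis, var.region)}"
  instance_type = "t2.micro"
}
```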

bitglue commented 9 years ago

@outrunthewolf The AWS provider requires a region because each region has its own API endpoints, so you can't connect to anything until you have decided what region to use (unlike DigitalOcean, which takes the region as a parameter to each API call). You could move the region into the resources, and then have them connect as necessary. Then what if you have multiple accounts? You could move the access and secret keys into the resources also, but then why have provider configuration at all?

I think the problem is that provider configurations are singletons, which is bad because they actually aren't. And incidentally, the workaround of having module-scoped provider blocks seems to no longer work on master -- I get errors about not providing required fields even though they are provided when I try. I'm not really sure what behavior to expect, since the intersection of modules and provider configuration doesn't seem to be covered in the docs at all.

mgood commented 9 years ago

@bitglue yeah, I am working on removing that singleton restriction based on some feedback from folks at Hashicorp. PR #1281 above isn't quite complete, but has a proof-of-concept allowing multiple providers defined in the same config.

PaulCapestany commented 9 years ago

:+1: for this

mitchellh commented 9 years ago

Done for 0.5

outrunthewolf commented 9 years ago

Is this the documentation for this?

https://www.terraform.io/docs/configuration/providers.html

Could you advise me on how the syntax for a module would work?

jszwedko commented 9 years ago

@outrunthewolf correct, that is the documentation.

You could specify the provider inside of your module and then reference that provider in your resources.

Or do you mean that you want to inject a provider? In that case you would inject the name of the provider and use that name in your resources.

outrunthewolf commented 9 years ago

So if I'm using AWS and I want a single provider with multiple regions... Excuse my stupidity here, but any chance you could give me some actual code examples? I can't quite visualise the approaches you've explained.

jszwedko commented 9 years ago

@outrunthewolf no worries!

Example of injecting a provider alias into a module:

main.tf

provider "aws" {
    access_key = "foo"
    secret_key = "bar"
    region = "us-east-1"
    alias = "east"
}

provider "aws" {
    access_key = "foo"
    secret_key = "bar"
    region = "us-west-1"
    alias = "west"
}

module "some_module" {
    source = "your_module"
    provider_alias = "east"
}

module "some_module_again" {
    source = "your_module"
    provider_alias = "west"
}

your_module/main.tf

variable "provider_alias" {}

resource "aws_instance" "foo" {
    provider = "aws.${var.provider_alias}"

    # ...
}

Let me know if that clears things up!

tj commented 8 years ago

Hmm this setup is kind of weak, if you change the name of a module to support multiple regions, it tries to recreate everything due to this name change. Modules are not very modular at all if you can't shuffle things around, in and out of modules etc. You're effectively locked into your initial configuration.

tj commented 8 years ago

Also appears to be broken, unless I'm missing something. I have the same setup as @jszwedko mentioned, and each resource has:

provider = "aws.${var.provider_alias}"

but I get dozens of:

... resource depends on non-configured provider 'aws.${var.provider_alias}'

seems like the interpolation stuff is failing with v0.6.14

endofcake commented 8 years ago

Let's say we want to deploy our app into 2 different regions in 3 different accounts. Is there any clean way of doing this without duplicating templates? Terraform doesn't seem to support nested maps, which would probably help:

variable "vpc-id" {
  type = "map"
  default = {
    test = {
      us-west-2 = "vpc-123654"
      us-east-1 = "vpc-789654"
    }
    prod = ...
  }
}

endofcake commented 8 years ago

Actually, I think I found a decent way to achieve what I wanted. I created a separate tfvars file for each region, and deploying the same configuration to a different region is now as easy as just specifying -var-file=./us-west-2.tfvars, for example. The state for each region-account combination is saved in a separate state file.
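As a sketch of that layout (the file name and values here are illustrative, not taken from the thread), each region gets its own tfvars file:

```hcl
# us-west-2.tfvars -- one file per region; values are placeholders.
aws-region = "us-west-2"
vpc-id     = "vpc-0000aaaa"
```

Then something like `terraform plan -var-file=./us-west-2.tfvars -state=./us-west-2.tfstate` selects that region's variables and keeps its state separate.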

freimer commented 8 years ago

I read through all of the comments on this issue, and I'm not clear what the current status is. Can someone update with the current recommended best practice for deploying resources in multiple AWS regions, whether in the same account or not, in the same .tf file, or is this still not possible?

tj commented 8 years ago

@freimer FWIW I ended up just having 5 separate terraform states, one per region and a little bash script. Hacky but it kind of works. As far as I know the provider support is still broken

freimer commented 8 years ago

Hmm, when I try the provider alias as described in the docs I get this:

freimer commented 8 years ago

Yea, this doesn't work with modules either. I guess I'll go the different sub-directories with completely separate terraform files in each with a wrapper script.

mgood commented 8 years ago

@freimer based on the error it sounds like an S3 bucket with the same name already exists in the "us-west-2" region. Depending on what you're trying to do you may need to either pick a new name for the bucket, or delete the one in "us-west-2" before creating this one in "us-east-1".

endofcake commented 8 years ago

@freimer , I guess the best approach would depend on your use case, I'll just describe what we do.

We're using Terraform to create "stacks" (we call them sandboxes) in multiple regions. Each sandbox contains an ELB and several autoscaling groups (+ associated launch configurations). This is essentially our unit of deployment to get the latest version of our code into stage or production. It assumes that VPCs, IAM roles, and S3 buckets are already created; we did this using either Terraform or CloudFormation. This underlying infrastructure doesn't change too often, so it's outside of our normal deployment scope. Terraform state is saved in a region-specific S3 bucket. This is how we manage the configuration:

variable "vpc-id" {
  type = "map"
  default = {}
}

And then define it in "us-west-2.tfvars":

vpc-id.test  = "vpc-1234"
vpc-id.stage = "vpc-2345"
vpc-id.prod  = "vpc-9897"

When we need to find this VPC id in an actual template, we do it like this: vpc-id = "${lookup(var.vpc-id, var.aws-env)}" This looks for an aws-env key (which can be test, stage or prod and is specified from command line at deploy time) in a map of vpc-ids.

Terraform is always invoked from a wrapper script. There are several steps (note that I'm using Powershell here, so the syntax is different from bash):

  1. Nuke .terraform folder. We don't need the previous state file screwing up our deployment. I played with naming them differently, didn't work for me.

    $cachedConfig = Join-Path (Convert-Path .) "\.terraform"
    if (Test-Path $cachedConfig) {
     Write-Verbose "Removing cached config $cachedConfig..."
     Remove-Item $cachedConfig -Recurse -Force
    }

    This is equivalent to rm -rf ./.terraform/.

  2. Set remote state:

    terraform remote config `
    -backend=s3 `
    -backend-config="bucket=$s3Bucket" `
    -backend-config="key=$fullKey" `
    -backend-config="region=$region" `
    -backend-config="encrypt=true" `
    -backend-config="acl=bucket-owner-full-control"
  3. Plan the changes:

    # This is mildly awkward, but Terraform doesn't seem
    # to expand the file path properly if I pass it directly
    $command  = "terraform plan "
    $command += "-var 'aws-env=$environment' "
    $command += "-var 'sandbox=$sandbox' "
    if ($regionConfig) {
     $command += "-var-file=$regionConfig " 
    }
    $command += "-no-color "
    
    # Use tee to capture output from Terraform and log it, otherwise it goes to the console only
    Invoke-Expression $command | Tee-Object -file "$env:TEMP/temp.tfplan"
  4. And then apply the plan.
  5. In the wrapper script, this is done in sequence for all our regions. To invoke the script, we only need to pass 2 mandatory parameters - environment (such as test) and a sandbox name (this is used for tagging).
  6. After we get this new sandbox up and running, we can start switching traffic from the old one (we use blue-green deployments) by changing Route 53 records. We distribute traffic between regions using Route 53 as well.

This may seem somewhat complicated and hacky, but it works pretty well for us. I should probably write a blog post or something, looks like it could be interesting for others.

freimer commented 8 years ago

@mgood no, that is not the case. The buckets don't exist in either east or west. Well, they do now because I broke out the S3 resources to a completely separate template, and now it works fine. I think what is happening is that the bucket constraint is being set in the data of the request, but the AWS S3 API endpoint for another region is being used. See the example from the AWS docs:

PUT / HTTP/1.1
Host: BucketName.s3.amazonaws.com
Content-Length: length
Date: date
Authorization: authorization string (see Authenticating Requests (AWS Signature Version
        4))

<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> 
  <LocationConstraint>BucketRegion</LocationConstraint> 
  </CreateBucketConfiguration>

The S3 API endpoints for regions other than US Standard (us-east-1) are different...

This may be an S3 only issue.

freimer commented 8 years ago

@endofcake Thank you very much! This looks like it will indeed be very useful when we get to that point. We have been using CloudFormation, which works well, but obviously has issues with multi-region or even multi-cloud ("templates" with resources in AWS and Azure, for instance). So what I'm working on right now is the "infrastructure" as far as the VPCs themselves, subnets, firewalls, and S3 buckets. It appears there are some issues specifically with S3 and multi-region Terraform templates. I'll check out that with a non-S3 resource. However, your post will come in very useful when we get to converting our application templates.

Thanks!

Ohadbasan commented 8 years ago

@tj Thanks for the advice. I've done the same, but there are still problems when I wish to manage "global" resources (like IAM roles): if I keep several tfstate files and try to apply a second time, I get an error that the resource already exists. I had to separate the "global" resources into a "global.tf" file with its own statefile.

endofcake commented 8 years ago

@Ohadbasan, yep, and it makes perfect sense to separate global resources into a separate state (you'd have the same problem with CloudFormation). If you need to share some resource references between separate stacks, I believe you could use modules for this (haven't tried this myself though).

Alternatively, it is possible to keep both global and regional resources in the same statefile, but it'd mean that you essentially have to duplicate regional resources and use aliases to reference the region. E.g.:

provider "aws" {
  # Default to Oregon
  region = "us-west-2"
}

provider "aws" {
  alias = "oregon"
  region = "us-west-2"
}

provider "aws" {
  alias = "virginia"
  region = "us-east-1"
}

# Oregon
resource "aws_route" "elb_peering_oregon" {
  provider = "aws.oregon"
*****
}

# Virginia
resource "aws_route" "elb_peering_virginia" {
  provider = "aws.virginia"
 *****
}

Definitely not ideal.

kung-foo commented 8 years ago

I worked up an example with multiple regions in a pure (no hacks) tf app: https://github.com/kung-foo/multiregion-terraform. Everything seems to work with all 11 regions and a single state file.

freimer commented 8 years ago

FYI, if I recall correctly, the issue I had with multi-region and S3 is that it takes a while for a bucket with a particular name to be deleted completely. So if you had a "bucket1" in us-east-1 and needed to "move" it to another region, you can't just delete it and then immediately re-create it in the other region. I don't believe there are any issues with Terraform and multi-regions. It is more of an AWS issue with timing.

kung-foo commented 8 years ago

Ah, yeah, I've had issues like that in the past (pre-tf) with buckets. In my case, though, I wanted to simply deploy a stack to multiple regions and hadn't been able to find a working example. Then I got distracted by the idea of deploying to every possible AZ with a single tf apply.

hugomatinho commented 5 years ago

This is quite an old post, but I'm looking for some guidance: is there any way to reference another region's data in Terraform to apply in another region? The use case is AWS CloudFront, which expects the ACM certificate to be present in us-east-1, while I need to apply in eu-west-1.

rimiti commented 5 years ago

@hugoduncan I have the same use case and the same problem.

kuritonasu commented 5 years ago

@hugomatinho @rimiti Create a new provider with an alias, just for CloudFront/ACM:

provider "aws" {
  alias  = "useast1"
  region = "us-east-1"
}

and in your ACM resource use the provider meta-argument:

resource "aws_acm_certificate" "cert" {
  provider = aws.useast1
  ...
}

or for modules:

module "acm" {
  source = "terraform-aws-modules/acm/aws"
  providers = {
    aws = "aws.useast1"
  }
  ...
}

ghost commented 5 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.