hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io

Ability to pass providers to modules in for_each #24476

Open mightyguava opened 4 years ago

mightyguava commented 4 years ago

Use-cases

I'd like to be able to provision the same set of resources in multiple regions using a for_each on a module. However, looping over providers (which are tied to regions) is currently not supported.

We deploy most of our infra in 2 regions in an active-passive configuration, so being able to instantiate both regions using the same module block would be a huge win. It's also our primary use case for for_each on modules, which is being implemented in https://github.com/hashicorp/terraform/issues/17519.

Proposal

Proposed syntax from @jakebiesinger-onduo

provider "google" {
  alias = "goog-us-east1"
  region = "us-east1"
}
provider "google" {
  alias = "goog-us-west1"
  region = "us-west1"
}
locals {
  regions = toset(["us-east1", "us-west1"])
  providers = {
    us-east1 = google.goog-us-east1
    us-west1 = google.goog-us-west1
  }
}
module "vpc" {
  for_each = local.regions
  providers = {
    google = local.providers[each.key]
  }
  ...
}

Another option would be to de-couple the region from providers and allow the region to be passed into individual resources that are region-aware. As far as I know, AWS and GCP credentials, at least, are both global.
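
To illustrate that second option, here is a rough sketch of what per-resource regions could look like. The region argument on the resource is hypothetical, since today the AWS provider only accepts the region in its provider block:

# Hypothetical sketch only: a per-resource region would let a plain for_each cover
# both regions without touching provider configuration. The "region" argument on
# this resource does not exist today.
resource "aws_sns_topic" "alerts" {
  for_each = toset(["us-east-1", "us-west-2"])

  name   = "alerts-${each.key}"
  region = each.key # hypothetical: region decoupled from the provider
}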

References

DawtCom commented 3 years ago

Can you send an example of how you are using this feature ask with 0.15.1?

tiagomatic commented 3 years ago

Here's what it looks like:

locals {
  tags = toset([
    "foo",
    "bar"
  ])

  services = toset([
    "a",
    "b",
    "c",
    "d"
  ])
}

module "services_in_environment_x" {
  for_each     = local.services
  source       = "../../../path/to/some/module"
  env          = "x"
  service_name = "service-${each.value}"
  owner        = "team.alpha@example.com"
  tags         = local.tags
}

module "services_in_environment_y" {
  for_each     = local.services
  source       = "../../../path/to/some/module"
  providers    = { some_provider = some_provider.some_alias }
  env          = "y"
  service_name = "service-${each.value}"
  owner        = "team.beta@example.com"
  tags         = local.tags
}

module "services_in_environment_z" {
  for_each     = local.services
  source       = "../../../path/to/some/module"
  providers    = { some_provider = some_provider.some_alias }
  env          = "z"
  service_name = "service-${each.value}"
  owner        = "team.beta@example.com"
  tags         = local.tags
}

As I mentioned, my IDE (IntelliJ with the Terraform plugin) does highlight for_each and providers saying that they clash, but I'm able to run terraform plan just fine.

jw-maynard commented 3 years ago

This isn't really a solution to the OP's request. He seems to want something where you would only specify services_in_environment once and then be able to for_each over a list of provider keys or aliases and generate the same module in each environment.

That said, with the pre-0.15 provider proxy block configuration, I think @tiagomatic's config was also impossible because Terraform saw the provider block in the module and blew up. This seems to be progress.

tiagomatic commented 3 years ago

Yes, 0.15.1 resolved my need to have for_each and providers in the same module block, but I understand now how using the interpolated value of ${each.value}, for example, in the alias is more important for the OP's problem. In general, I think not being able to use pre-interpolated values as references is a big weakness in any language.

nikolay commented 3 years ago

@tiagomatic This works fine on v0.13.7 as well.

nadenf commented 3 years ago

@apparentlymart .. I don't understand the logic behind prioritising this only well after 1.0 is released.

HashiCorp has communicated that one of the requirements for 1.0 is a stable foundation/architecture, and one of the core aspects of this foundation is how you interact with providers. Given that fixing this issue dramatically changes how people write their code (i.e. by not needing Terragrunt or template generation), why wouldn't you want it fixed before 1.0?

nikolay commented 3 years ago

@apparentlymart Worst of all, there are workarounds for people who are not using HashiCorp's commercial offering (Terraform Cloud), as everybody else can use a preprocessor like Jinja2 to avoid creating piles of messy copypasta. It's very simple - we can create Kubernetes clusters dynamically, but we can't do anything with them dynamically, as the Kubernetes provider is static just like any other provider. And this pretty much prevents any modern DevOps shop from using Terraform without hardcoding.

mofesola commented 3 years ago

Is there any update on this yet?

carcuevas commented 3 years ago

Yes please, we need this feature! My eyes are itchy just looking at all the code I need to copy and paste... :-(

tomarv2 commented 3 years ago

(quoting @tiagomatic's example and comment above)

Yes, looks like it's resolved in the new 1.0.1. The IDE is reporting an issue but deployment works fine.

acidprime commented 3 years ago

I came across this need while setting up a series of Vault clusters: to configure each one I need to set the provider URL for each module loop iteration.

waxb commented 3 years ago

Creating, for example, Vault namespaces or Azure subscriptions and using them in the same run seems impossible at the moment because of this limitation.

mikegreen commented 3 years ago

@waxb

Creating, for example, Vault namespaces or Azure subscriptions and using them in the same run seems impossible at the moment because of this limitation.

For Vault namespaces, you can work around this (it's hacky) by using modules that alias each namespace. I have a demo here that might help until this issue is resolved: https://github.com/mikegreen/terraform-vault-namespace-demo
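
A minimal sketch of that style of workaround (the module path, variable name and vault_mount resource are my own illustration, not taken from the linked demo): the child module carries its own Vault provider scoped to one namespace, so each namespace gets its own module block instead of a for_each.

# modules/namespace-baseline/main.tf (illustrative)
variable "namespace" {
  type = string
}

# Address and token come from VAULT_ADDR / VAULT_TOKEN in the environment.
provider "vault" {
  namespace = var.namespace
}

resource "vault_mount" "kv" {
  path = "kv"
  type = "kv-v2"
}

# Root module: one block per namespace, because a module with its own provider
# block cannot use for_each.
module "namespace_team_a" {
  source    = "./modules/namespace-baseline"
  namespace = "team-a"
}

module "namespace_team_b" {
  source    = "./modules/namespace-baseline"
  namespace = "team-b"
}

The trade-off is discussed further down in this thread: a module containing its own provider block cannot use count or for_each, and removing one requires a two-step destroy.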

waxb commented 3 years ago

(quoting @mikegreen's reply above)

Thank you for the demo. I actually 'solved' the Vault problem with a similar 'hacky' approach, so I agree the 'impossible' part was a bit theatrical. Still, with this issue's feature, all similar problems - where you are creating something that requires its own provider configuration - could be handled in a clean way.

With Azure, I now needed to strip the code down and drop for_each and depends_on in order to create resources in the subscription in the same run.

vishwa-trulioo commented 3 years ago

Yo HashiCorp, you guys should stop doing all other work and make all your developers focus on this. People pointed this out over 1.5 years ago, yet you are still running around in circles.

gtmtech commented 2 years ago

I am able to completely dynamically generate providers and associated modules using a terragrunt generate() block with contents = formatlist(...), in case it helps anyone wanting to do this outside of Terraform.

Doing so gave us the option of having feature-oriented Terraform statefiles across hundreds of AWS accounts, which makes for an interesting alternative approach to managing large cloud estates.
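
For anyone curious what that looks like, here is a rough sketch (the account IDs, local names and role ARN are assumptions, not taken from the above setup): Terragrunt renders one provider block and one module block per account into a generated file before Terraform ever runs.

# terragrunt.hcl (sketch)
locals {
  accounts = ["111111111111", "222222222222"]

  # %s placeholders are filled per account by formatlist below.
  block_template = <<-EOF
    provider "aws" {
      alias = "acct_%s"
      assume_role {
        role_arn = "arn:aws:iam::%s:role/OrganizationAccountAccessRole"
      }
    }

    module "baseline_%s" {
      source    = "./baseline"
      providers = { aws = aws.acct_%s }
    }
  EOF
}

generate "providers_and_modules" {
  path      = "_generated.tf"
  if_exists = "overwrite_terragrunt"
  contents  = join("\n", formatlist(local.block_template,
    local.accounts, local.accounts, local.accounts, local.accounts))
}

Because Terragrunt runs the generate step before calling Terraform, by the time terraform init/plan runs the providers are ordinary static configuration.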

gtmtech commented 2 years ago

I have done a POC showing how to do this in terragrunt (but not terraform) - leaving here if it's of any use to anyone

https://github.com/gtmtechltd/terragrunt-poc

It opens up the possibility of org-wide, pan-account, feature-based terraforming, which has the distinct advantage of keeping statefiles small while still covering org-wide features.

It could easily be adapted to the OP's request to iterate over regions instead of accounts.

rayjanoka commented 2 years ago

I have done a POC showing how to do this in terragrunt (but not terraform) - leaving here if it's of any use to anyone

This is a good solution.

Am I correct to say that you need to know all of your AWS accounts in the terragrunt phase? The limitation then is that you cannot create a new AWS account via a Terraform resource (generating a new AWS account ID) and then, in that same run, generate a provider and use it to create resources in the new account.

karivera2 commented 2 years ago

This is what I am trying to do:

  • Get all accounts from the organization
  • Loop through the accounts to configure a baseline in each account

We run the harness from AWS Cloud Shell in the same account that runs Control Tower.

data "aws_organizations_organization" "current_org" {}

module "aws_s3_baseline" {
    for_each = { for account in data.aws_organizations_organization.current_org: account.id => id }
    source = "../../modules/aws-s3-baseline"
    provider "aws" {
            assume_role = {
            role_arn = "arn:aws:iam::${each.value.id}:role/AWSControlTowerExecution"
      }
    }   

}
cotz1995 commented 2 years ago

Any updates on this? I'm currently trying to do something very similar to OP with AWS.


provider "aws" {
  region = var.region
}

provider "aws" {
  region = "us-west-1"
  alias  = "us-west-1"
}

locals {
  providers = {
    us-east-2 = aws,
    us-west-1 = aws.us-west-1
  }
}

module "apis" {
  source = "./modules/api"

  for_each = local.providers

  providers = {
    aws = each.value
  }

  region       = each.key
  environment  = var.environment
  project_name = var.project_name
  default_tags = local.tags

  domain          = var.domain
  certificate_arn = module.network.api_certificate_arn

  lambda_names             = var.lambda_names
  cloudwatch_log_group_arn = aws_cloudwatch_log_group.logs.arn
}
...
KevinLoganBS commented 2 years ago

This is easily the biggest hardship with Terraform: having to have providers known up front, with no way to make them dynamically. I think the most flexible approach would be to allow Terraform to synthesize a template into static Terraform.

For example, this is how CDKTF works (and Terragrunt's generate). Having a built-in templating engine that can render the actual Terraform to plan/apply would ensure compatibility with core assumptions in Terraform without worrying much about compatibility of different provider declarations.

Terraform can already handle templating (templatefile), so would it be that much of a reach to have some meta-templatefile functionality (have terraform templatefile itself, before executing plan/apply)?

This would also make things like conditionals easier because instead of having to copy/paste some count logic (or create a new module with count logic), it can just be applied from the templating.
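
As a concrete illustration of that idea (the file name, variable names and the rendering step are hypothetical - Terraform has no built-in way to feed rendered output back into its own configuration today), the pre-processing template could be an ordinary string template that stamps out a provider and module block per region:

# modules.tf.tmpl - rendered by templatefile() or an external step before plan/apply
%{ for region in regions ~}
provider "aws" {
  alias  = "${region}"
  region = "${region}"
}

module "api_${replace(region, "-", "_")}" {
  source    = "./modules/api"
  providers = { aws = aws.${region} }
}
%{ endfor ~}

The missing piece, as noted above, is a supported hook to run that rendering as part of plan/apply itself rather than as a separate wrapper step.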

raffraffraff commented 2 years ago

I know it's pointless repeating what others have said, and I can only click so hard on the subscribe button. This change broke my codebase pretty badly when it exposed under-the-hood details that should be hidden by the language. I understand that the purpose of the change was to ensure that Terraform providers exist throughout the whole lifecycle of the resources they create, but could this not be relaxed during creation? Deletion is a different story, of course! But with the latest versions of Terraform I can no longer create EKS clusters and then use for_each to deploy resources into each of them. Could we not have an option to override this behaviour on creation of new providers and their resources?


markdjones82 commented 2 years ago

(quoting @karivera2's comment above)

I am also trying to do the same thing with AFT in global config.

karivera2 commented 2 years ago

(quoting @markdjones82's reply above)

I am adding folder levels to help some. Thankfully we block all but two regions. I have to call the baseline once per account per region; I am not looking forward to how large this file will get. I am looking into AFT but it is not as polished as I had hoped. It is easier for me to create the account with Terraform, go to the console and register the account with Control Tower, and then run the baseline on the account. I can then manage the state of org-level resources and account-level resources with Terraform.

main.tf

terraform {
    backend "s3"{
        bucket = "mybucket-remote-state"
        key = "terraform-account.tfstate"
        region = "us-east-2"
    }
}

module "aws_accounts" {
    source = "../../modules/aws-account-creation"
}

/*
Call each account once per region.
Due to a limitation in Terraform we can't loop through a list of accounts and regions.
Hoping for an enhancement to Terraform, or to use Terragrunt, to help DRY out the code in future versions.
Must call the baseline once per region per account, i.e. if two regions are allowed there will be two calls per
account in this file.
*/
module "aws_account_baseline_123456789_us_east_2" {
    source = "./modules/aws-account-baseline-standard"
    aws_region = "us-east-2"
    account_id = "123456789"
}

module "aws_account_baseline_123456789_us_east_1" {
    source = "./modules/aws-account-baseline-standard"
    aws_region = "us-east-1"
    account_id = "123456789"
}

module "aws_account_baseline_987654321_us_east_2" {
    source = "./modules/aws-account-baseline-standard"
    aws_region = "us-east-2"
    account_id = "987654321"
}
module "aws_account_baseline_987654321_us_east_1" {
    source = "./modules/aws-account-baseline-standard"
    aws_region = "us-east-1"
    account_id = "987654321"
}

./modules/aws-account-baseline-standard/main.tf

provider "aws" {
  region  = var.aws_region
  assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/AWSControlTowerExecution"
  }
}

module "aws_vpc_baseline" {
    source = "../../../../modules/aws-vpc-baseline"
}
module "aws_ec2_baseline" {
    source = "../../../../modules/aws-ec2-baseline"
}
module "aws_s3_baseline" {
    source = "../../../../modules/aws-s3-baseline"
}
ArturasDB commented 2 years ago

For the last few weeks I have been writing a module that creates EKS clusters which can be flexibly configured via variables, and that means using a for_each loop. Every resource can be created without a problem, but as soon as I want to add standard Helm releases such as the cluster autoscaler, this issue blocks me, because providers live under their own rule set and cannot be created dynamically.
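
To make the blocked pattern concrete (cluster names and the module output attributes are assumptions): the clusters can be created with for_each, but the per-cluster Helm/Kubernetes provider configuration that would have to follow is not expressible today.

# This part works: create N clusters from one module block.
module "eks" {
  for_each = toset(["blue", "green"])
  source   = "./modules/eks-cluster"
  name     = each.key
}

# This part is NOT valid Terraform today - provider blocks cannot use for_each or
# take per-instance module outputs - but it is the shape this issue asks for.
provider "helm" {
  for_each = module.eks
  alias    = each.key

  kubernetes {
    host                   = each.value.endpoint
    cluster_ca_certificate = base64decode(each.value.certificate_authority)
  }
}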

nikolay commented 2 years ago

I think there's no point in making any comments on these issues - obviously, HashiCorp has other priorities such as CDKTF (to compete with Pulumi), and others unknown to us. It used to be a pleasure using Terraform, but now they've abandoned providers such as TLS, MySQL, etc., and do not care much about core Terraform. I keep hearing that this is a big change in the core of Terraform and how it works, but it's not too hard to add a macro capability outside of the box and not require people to use Terragrunt and CDKTF, which, by the way, are not supported by their commercial offering of Terraform Cloud. I keep saying this, but they don't serve their own financial interest, as competitors such as env0 support Terragrunt and TFC doesn't. There's no hook to do custom preprocessing before planning and applying Terraform plans with TFC. In all cases, preprocessors are easy and meaningful, so even once dynamic providers become available this capability would still be valuable, and Terragrunt is an example of this. Adopting Terragrunt would be the easiest, but it looks like HashiCorp is suffering from the well-known Not-Invented-Here syndrome! Thanks to the obsolete core, IaC is now IaCP (Infrastructure-as-Copypasta). All my plans are driven by well-structured and validated variables... except the things which require support for dynamic providers! I apologize for these speculations, but at least give us some transparency on the topic, HashiCorp!

crw commented 2 years ago

We appreciate our community's feedback. Contributions in the form of comments on issues, creation of new issues, and creation and comments on PRs are always welcome. These comments are being seen and they do contribute to the prioritization process.

Please try to keep comments in issues focused on furthering the understanding of the issue, ideally through use cases and work-arounds. Please be mindful of the Community Guidelines when posting. Thanks very much for being a part of the community!

brentonfairless commented 2 years ago

Similar requirement from me.

My issue is that I am trying to provision namespaces, and "sub-namespaces" based on a list(map) variable for Vault Enterprise, as per this document https://learn.hashicorp.com/tutorials/vault/namespaces.

It appears the only way to make a namespace under a namespace, is to provision a provider with the namespace parameter set. Problem is, I only know the namespace value at runtime inside a module.

I'll try the workaround I saw posted above, but my brain is fried from trying to figure out a way to map my data elegantly for hours.

FalcoSuessgott commented 2 years ago

(quoting @brentonfairless's comment above)

That's exactly my use case, and I ended up having a single statefile for each namespace and thus calling terraform apply multiple times. I then pass the providers for the root, parent and child namespace to each module.

But note that there is currently a PR open (https://github.com/hashicorp/terraform-provider-vault/pull/1305) that addresses this by allowing the namespace to be passed to each resource instead of being set in the provider configuration. Hope this helps you :)
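
If that PR's approach lands, the namespace becomes an ordinary argument and plays nicely with for_each. A rough sketch (resource choice and argument name follow that PR, so check the provider version before relying on it):

provider "vault" {}

resource "vault_mount" "kv" {
  for_each  = toset(["team-a", "team-b"])
  namespace = each.key # per-resource namespace instead of per-provider configuration
  path      = "kv"
  type      = "kv-v2"
}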

YuriGal commented 2 years ago

Bump. Any progress?

jorhett commented 2 years ago

@apparentlymart you quite reasonably asked for "at least a year"... we're now past 18 months. Any timeline for addressing this?

adudek commented 2 years ago

If it's any consolation, it's possible with a little sprinkle of Terragrunt on top. This isn't a ready-made solution, just concept code:

generate "binding_modules" {
  path      = "_binding_modules.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
    # generated from terragrunt.hcl
    %{for p in var.subacc_providers}
      module "binding_${p}" {
        source    = "./actual-code"
        providers = {
          awsnetwork = aws.awsn
          awssubacc = ${p}
        }
      }
    %{endfor}
  EOF
}
pryorda commented 2 years ago

The current code does tell you if you have resources that were created with a previous provider that is no longer there. It would be great to have the ability to do things with parameterized providers.

dthauvin commented 2 years ago

Hello, is there any chance of releasing this feature? It would be very, very useful.

frugecn commented 2 years ago

Interesting use-case to add to the mix. With AWS, the DynamoDB table resource takes a list of replica regions to create its global table infrastructure. Applying a customer-managed KMS key (CMK) becomes problematic, because creating the keys in the different regions requires passing in a provider per region. Without a dynamic provider, there is an inconsistency between the two resources that makes them hard to keep in sync.
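
A sketch of that inconsistency (key descriptions, table shape and the aliased provider are illustrative): the replica region is just a string on the table, but each region's customer-managed key still needs its own statically configured provider.

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "use2"
  region = "us-east-2"
}

resource "aws_kms_key" "primary" {
  description = "global table CMK - us-east-1"
}

# Creating the replica-region key still requires a hardcoded aliased provider.
resource "aws_kms_key" "replica" {
  provider    = aws.use2
  description = "global table CMK - us-east-2"
}

resource "aws_dynamodb_table" "global" {
  name             = "example"
  billing_mode     = "PAY_PER_REQUEST"
  hash_key         = "id"
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  attribute {
    name = "id"
    type = "S"
  }

  server_side_encryption {
    enabled     = true
    kms_key_arn = aws_kms_key.primary.arn
  }

  replica {
    region_name = "us-east-2"
    kms_key_arn = aws_kms_key.replica.arn
  }
}

If providers could be passed per for_each instance, the replica entries and their keys could be generated from the same list of regions.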

DavidGamba commented 2 years ago

I have solved this problem by using for_each as an on-off enabled toggle.

My provider.tf file:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.74.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.7.1"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "3.1.0"
    }
  }
}

provider "aws" {
  region  = var.defaults["aws_region"]
  profile = var.profile
}

The project layout uses the same main.tf with multiple accounts:

.
├── main.tf
├── backend.tf
├── provider.tf
├── variables.tf
└── envs
    ├── us-dev-1
    │   ├── variables.tfvars
    │   └── backend.tfvars
    └── us-prod-1
        ├── variables.tfvars
        └── backend.tfvars

The backend at the root level is a placeholder:

terraform {
  backend "s3" {
  }
}

And the envs/us-dev-1/backend.tfvars looks like:

bucket  = "us-dev-1-terraform-state"
key     = "monorepo/projects/project-a/us-dev-1.tfstate"
region  = "us-east-1"
profile = "us-dev-1"
$ ./terraform init -reconfigure -backend-config=envs/us-dev-1/backend.tfvars
$ ./terraform plan -var-file=envs/us-dev-1/variables.tfvars

Regarding the module, if it doesn't require dynamic providers you can use for_each directly as an on-off switch, but not as a collection of elements. You can handle the collection of elements using a module of modules, with the top-level one acting as the on-off switch and the sub-modules using for_each as normal.

module "my_module_1" {
  for_each = toset(var.account == "dev" ? ["enabled"] : [])
  ...

module "my_module_2" {
  for_each = toset(var.account == "dev" ? ["enabled"] : [])
  ...

For dynamic providers, like kubernetes, you need to introduce a variable to do your on-off switch. And use a module of modules:

modules/eks-cluster/
├── eks-cluster
│   ├── main.tf
│   └── variables.tf
├── eks-managed_node_group
│   ├── main.tf
│   ├── provider.tf
│   └── variables.tf
├── main.tf
├── provider.tf
└── variables.tf

The caller code will look like this:

module "eks-cluster-1" {
  enabled = toset(contains(["dev", "prod"], var.account) ? ["enabled"] : [])
  source  = "../modules/eks-cluster"
  ...
}

module "eks-cluster-2" {
  enabled = toset(var.account == "dev" ? ["enabled"] : [])
  source  = "../modules/eks-cluster"
  ...
}

The modules/eks-cluster/variables.tf has the on-off variable definition:

variable "enabled" {
  type = set(string)
}

The modules/eks-cluster/main.tf defines the dynamic provider and calls the sub-modules:

module "eks-cluster" {
  for_each = var.enabled
  source   = "./eks-cluster"
  ...
}

provider "kubernetes" {
  alias                  = "eks-cluster"
  host                   = length(var.enabled) > 0 ? data.aws_eks_cluster.eks-cluster["enabled"].endpoint : ""
  cluster_ca_certificate = length(var.enabled) > 0 ? base64decode(data.aws_eks_cluster.eks-cluster["enabled"].certificate_authority[0].data) : ""
  token                  = length(var.enabled) > 0 ? data.aws_eks_cluster_auth.eks-cluster["enabled"].token : ""
}

module "eks-managed_node_group" {
  for_each = var.enabled
  source   = "./eks-managed_node_group"

  providers = {
    kubernetes = kubernetes.eks-cluster
  }
  ...
}

This is what the state ends up looking like:

"provider": "module.eks-cluster-1.provider[\"registry.terraform.io/hashicorp/kubernetes\"].eks-cluster",

...

"provider": "module.eks-cluster-2.provider[\"registry.terraform.io/hashicorp/kubernetes\"].eks-cluster",

The challenge with this design is that, since Terraform doesn't store provider connection details in the state, you have to destroy in 2 stages. First, mark the entry as disabled:

module "eks-cluster-1" {
  enabled = toset(contains(["dev", "prod"], var.account) ? ["enabled"] : [])
  source  = "../modules/eks-cluster"
  ...
}

module "eks-cluster-2" {
  enabled = toset(["destroy-me"])
  source  = "../modules/eks-cluster"
  ...
}

Then plan/apply that. Terraform will still be able to connect to the cluster. Finally remove the block.

frugecn commented 2 years ago

@DavidGamba, an interesting approach. The big hang-up I see is that best practice says we're not supposed to put a provider block in the code that the module's source parameter actually calls. If it works until dynamic providers can be introduced, it may be a good approach, but it's definitely not a long-term solution.

apparentlymart commented 2 years ago

FWIW that caveat at the end about needing to use two separate apply steps in order to remove the module is exactly why we deprecated putting provider blocks in nested modules, so if you're willing to accept that quirky workflow -- which can be okay as long as you don't expect to be removing "instances" often -- then I don't think there's any other significant downside to doing it that way.

The documentation recommends against it primarily because people were often getting themselves into that "trap" and not understanding how to do that two-step process to first destroy the objects and then remove the provider configuration they rely on.

vivanov-dp commented 2 years ago

@DavidGamba Yes, but you are still using only one provider for the whole group. The discussion is not only about dynamically creating providers, but also about being able to assign them to variables and pass them as parameters like other Terraform objects, so that we can do a for_each with a list of providers and pass every instance of the module (or resource, why not?) a different one.

We work around this limitation by using a code generator and there were a few examples with terragrunt here in the thread doing the same thing.

And although we have built that part of our code base already and moved on, we would still very much like to see these features implemented - I personally would love to dig into it and tell everybody for the next month - don't touch me, I'm refactoring :)

nikolay commented 2 years ago

@vivanov-dp No need to use a code generator when you can use CDKTF for all the dynamic stuff not supported by Terraform proper yet - like glue code of HCL modules.

DavidGamba commented 2 years ago

@vivanov-dp I see the end goal as being able to deploy a piece of infra across multiple accounts and regions using the same module block. The approach I showed is being used to deploy EKS clusters in 3 accounts across 4 regions, so I thought it could help someone who was stuck.

Granted, my approach requires a separate state file for each account/region combination, so I can see how you might want more than one region in the same statefile with a built-in solution. Also, because the provider connection information (or the data lookups required to connect) is not persisted in the state, my approach requires a two-step destroy process, which is what this feature would need to solve.

Finally, because providers escape module boundaries [1], I can't easily do blue/green to update provider versions, which is awful when you really don't want to touch base-level infra, like EKS, until the next version release. So the feature I came here looking for was a way to pass provider versions to the module, so I could have one block with the old provider versions and a new block with the new EKS version and new provider versions.

[1] https://github.com/hashicorp/terraform/issues/25343#issuecomment-649149976

prmarino1m commented 2 years ago

this seems related to but not the same as https://github.com/hashicorp/terraform/issues/19932

nirvana-msu commented 2 years ago

(quoting @apparentlymart's comment above)

@apparentlymart I am personally comfortable with the two-step apply workflow, however this does not seem possible at all with the recent Terraform version? It just errors with Error: Module is incompatible with count, for_each, and depends_on. I assume there is no way to make it work in a recent release?

apparentlymart commented 2 years ago

Hi @nirvana-msu,

As the error message says, a module with its own provider blocks inside cannot be a multi-instance module using count or for_each. If you want to use that technique then you will need to call your modules without for_each or count. That isn't directly related to what this issue is representing, so if you'd like to discuss alternatives I suggest starting a topic in the community forum, where we can discuss it without making lots of noise for people who are subscribed to this issue. Thanks!

nirvana-msu commented 2 years ago

I see. Some of the discussion here made it seem like it was possible in the past to use module for_each and have providers defined inside modules, but was recommended against because of the two-step apply process gotcha. I figure this was never the case and I misunderstood. Thanks.

eyulf commented 2 years ago

Bumping this for visibility. I'd like to be able to do this so I can deploy config to multiple regions without needing to resort to messy child modules (which do not play well when the apply is being done by a smaller container) or worse, CloudFormation.

sbwise01 commented 2 years ago

Here is how I was thinking of it being used:

provider "aws" {
  region = "us-east-1"
  alias  = "use1"
}

provider "aws" {
  region = "us-east-2"
  alias  = "use2"
}

provider "aws" {
  region = "us-west-1"
  alias  = "usw1"
}

provider "aws" {
  region = "us-west-2"
  alias  = "usw2"
}

locals {
  buckets = {
    bucket1 = {
      main = aws.use1
      dr   = aws.use2
    }
    bucket2 = {
      main = aws.usw1
      dr   = aws.usw2
    }
  }
}

module "bucket" {
  for_each = local.buckets

  source = "./modules/example"
  name   = each.key

  providers = {
    aws.main = each.value.main
    aws.dr   = each.value.dr
  }
}

As far as I can tell, this doesn't violate any of the design constraints listed by @apparentlymart in this comment, as all providers for all resource declarations are enumerated with known configurations early in the run cycle. Essentially all this is doing is allowing the values used in the providers argument to be a reference to a provider's type and alias, which I think is in line with making provider configurations a special kind of value in the language.

jiba21 commented 1 year ago

I have a workaround:

  1. During the account creation process, write the providers and the common module:

resource "local_file" "provider_update_common_resources" {
  filename        = "${path.module}/../organization-resources/provider.tf"
  file_permission = "666"
  content         = <<-EOT
    provider "aws" {
      alias   = "principal"
      profile = "principal"
      region  = var.region
      default_tags {
        tags = {
        }
      }
    }
    %{for profile in local.account_name_profile}
    provider "aws" {
      alias   = "${profile}"
      profile = "${profile}"
      default_tags {
        tags = {
        }
      }
    }
    %{endfor}
  EOT
}

resource "local_file" "module_resources" {
  filename = "${path.module}/../organization-resources/basic-resources.tf"
  content  = <<-EOT
    %{for profile in local.account_name_profile}
    module "basic_resources_${profile}" {
      source = "git::git@github.com:***/aws/organization-resources"

      providers = {
        aws         = aws.${profile}
        aws.shareds = aws.shareds
      }
    }
    %{endfor}
  EOT
}

resource "local_file" "aws_config" {
  filename = ".aws/config"
  content  = <<-EOT
    [default]
    region = **

    %{for key, value in local.account_ids}
    [profile ${key}]
    ...
    %{endfor}
  EOT
}

Where:

account_ids = zipmap(
  values(aws_organizations_account.this)[*].name,
  values(aws_organizations_account.this)[*].id,
)

  2. The files basic-resources.tf and provider.tf will be created under the organization-resources folder. We then need to copy the generated .aws file into the local config and run terraform apply there.

I hope that could help someone.
hypervtechnics commented 1 year ago

@jiba21 I actually really like that approach, although I think it is only usable in small-scale, non-automated scenarios. When the code is checked into git and rolled out automatically, it breaks, or at least becomes a bit tedious and unintuitive to handle.

I like the approach because it handles changes more gracefully: the apply is always off by one, which prevents errors à la "provider config not present anymore - cannot destroy resources". That requires the local_file resources to depend on everything else. Maybe the whole behaviour could be refined to also have a one-off safeguard such as prevent destroy.
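
A sketch of those two guard rails (the resource and module names are placeholders borrowed from the workarounds above): make the generated file depend on everything it describes, and protect it from accidental destruction.

resource "local_file" "provider_update_common_resources" {
  filename = "${path.module}/../organization-resources/provider.tf"
  content  = "..." # rendered as in the workaround above

  # Only write/update this file after the accounts module has been applied.
  depends_on = [module.aws_accounts]

  lifecycle {
    prevent_destroy = true
  }
}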

apparentlymart commented 1 year ago

Thanks for sharing that workaround, @jiba21!

I think that is a pragmatic way to get something working, but I do want to be clear that modules modifying their own source code using providers (such as local_file) is not something we can promise to stay compatible with in future versions, because it relies on a number of implementation details of exactly what order Terraform performs its side-effects in. In particular, there is an open request elsewhere for making modules be covered by the dependency lock file in a similar way as providers, and it seems likely that an implementation of that would fail in some way if a module's source code changes after it was initially installed without that being explicitly allowed by the -upgrade option to terraform init.