hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

Instantiating Multiple Providers with a loop #19932

Open JakeNeyer opened 5 years ago

JakeNeyer commented 5 years ago

Current Terraform Version

Terraform v0.11.11

Use-cases

In my current situation, I am using the AWS provider so I will scope this feature request to that specific provider, although this may extend to other providers as well.

I am attempting to create resources in multiple AWS accounts. The number of accounts ranges from 0 to x and will be dynamic. I would like to be able to instantiate multiple providers, each of which can assume a role in one of the accounts, and in turn create resources with the associated provider, without hard-coding a provider for each subsequent account.

For example, something like this:


variable "accounts" {
  type    = "list"

  default = ["123456789012", "210987654321"]
}
variable "names" {
  type = "map"

  default = {
    "123456789012" = "foo"
    "210987654321" = "bar"
  }
}
provider "aws" {
  count   = "${length(var.accounts)}"
  alias   = "${lookup(var.names, element(var.accounts, count.index))}"

  assume_role {
    role_arn = "arn:aws:iam::${element(var.accounts, count.index)}:role/ASSUMEDROLE"
  }
}

resource "aws_instance" "web" {
  count         = "${length(var.accounts)}"
  provider      = "aws.${lookup(var.names, element(var.accounts, count.index))}"
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  tags = {
    Name = "HelloWorld"
  }
}
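For contrast, what current Terraform does accept is one statically written provider block per account, selected by a literal alias. A sketch, assuming the same two accounts and role name as in the request above; every new account means another copy-pasted pair, which is exactly the duplication this feature request aims to remove:

```terraform
provider "aws" {
  alias = "foo"

  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/ASSUMEDROLE"
  }
}

provider "aws" {
  alias = "bar"

  assume_role {
    role_arn = "arn:aws:iam::210987654321:role/ASSUMEDROLE"
  }
}

resource "aws_instance" "web_foo" {
  provider      = aws.foo
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
}

resource "aws_instance" "web_bar" {
  provider      = aws.bar
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
}
```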
fitzoh commented 2 years ago

FWIW this issue was a major motivating factor for me to switch to Pulumi, which allows you to spin up an arbitrary set of providers

eredi93 commented 2 years ago

hitting this issue as well 😞

benvbr commented 2 years ago

Hitting this limitation as well. Any news on this?

ecout commented 2 years ago

Is there a technical reason why this hasn't been fixed in over two years? For our use case, all we're asking is for Terraform to support functionality similar to CloudFormation StackSets without having to resort to that resource. Why is that so difficult?

bryankaraffa commented 2 years ago

Reminder all, if we want to increase priority of this issue please help and click the πŸ‘ on the initial post: https://github.com/hashicorp/terraform/issues/19932#issue-396653063

sc250024 commented 2 years ago

Is there a technical reason why this hasn't been fixed in over two years? For our use case, all we're asking is for Terraform to support functionality similar to CloudFormation StackSets without having to resort to that resource. Why is that so difficult?

I remember seeing (and don't quote me on this; just trying to remember that one brief moment a while ago) that it has to do with how the Terraform model works under the hood. They (the developers) make an assumption that any provider block must be loaded first before any of the HCL language constructs (like for_each, count, etc.) are processed and rendered. I think that's sort of why it's taken this long. To do this issue is definitely not trivial, otherwise it would have been solved a while ago.

ecout commented 2 years ago

Is there a technical reason why this hasn't been fixed in over two years? For our use case, all we're asking is for Terraform to support functionality similar to CloudFormation StackSets without having to resort to that resource. Why is that so difficult?

I remember seeing (and don't quote me on this; just trying to remember that one brief moment a while ago) that it has to do with how the Terraform model works under the hood. They (the developers) make an assumption that any provider block must be loaded first before any of the HCL language constructs (like for_each, count, etc.) are processed and rendered. I think that's sort of why it's taken this long. To do this issue is definitely not trivial, otherwise it would have been solved a while ago.

It makes sense considering the Stateful nature of Terraform. Thanks for the reminder.

For the record, somebody posted an example of a for_each loop used to switch providers in this thread; it might be hidden, so you have to expand all the comments. The username is bryankaraffa. He used sts:AssumeRole to switch accounts by iterating through account IDs and assuming a role in each one.

ecout commented 2 years ago

Reminder all, if we want to increase priority of this issue please help and click the πŸ‘ on the initial post: #19932 (comment)

Done! And thank you for your code example!

gtmtech commented 2 years ago

Cross-posting from #31069 as requested

I work on an enormous hybrid cloud platform: it consists of 3 clouds, hundreds of accounts/projects, multiple environments, regions and so on. Some simple terraform installations would not have a need for a solution in this space, but if you are a B2B provider, or an internal service provider to a very large company (e.g. 50,000+ employees), you are likely to hit the difficulty of how to structure all of your terraform in a way that scales well.

Terraform best practice dictates that a single terraform run (and thus state file) should manage a moderate, but not excessive, number of resources; perhaps 100 is a reasonable number of terraform resources per state file. At some point as you scale out, you have to make decisions on how to split resources across multiple terraform state files.

In a large multi-account AWS setup for example, you might make a reasonable decision to start splitting state across account boundaries, especially if a lot of AWS accounts are very similar and have similar resources set up (cloudtrail, networks, configs, iam etc.)

This also seems reasonable because in order to terraform across 2 accounts in AWS, you need 2 providers. A provider is intimately tied to a set of credentials and a context with which to access the AWS API, and that set of credentials or that context stipulates the target AWS account you are terraforming in.

Providers support partial parameterisation, so you can inject things like the role_arn or this or that credential, giving you parameterised code that can be run against each of your AWS accounts; this results in a state file per account. This is the typical way I have seen companies use terraform through my client engagements.

As that scales up, though, you get further problems. Instead of 2 accounts, suppose you are terraforming 1000. That means 1000 terraform runs, and that means you need an automated orchestrator to keep them all in sync, as it's too big a task for a human; step in things like terragrunt, or some enterprise offerings, to help out.

However, whilst those tools sort out the running of lots of terraform runs, another problem starts to creep in. Developers tend to work on features, not on accounts, and terraform itself may not run completely as desired across all 1000 accounts due to engineer error, network timeouts, AWS API rejections, or race conditions. Some AWS resources take a very long time to instantiate and change, such as AWS Microsoft Active Directory at around 50 minutes. During such a change, all 1000 accounts are potentially "locked out" from other feature development. In a large team of engineers with a lot of features to change, waiting for hours for terraform runs to finish is not great.

So you might think to split certain products out from others and create separate runs for them, so they don't lock out the entire state for everyone else whilst updating. Now instead of 1000 state files, you have 2000, and then 4000, and so on.

For each split, you may well need to introduce dependencies in your orchestrator to make sure that X happens before Y - e.g. the IAM permissions are set up before the resources that need them - or the networks are set up before the loadbalancers.

Pretty soon, the orchestrator's dependency graph is also creating a huge workflow that takes hours to resolve. If my CI system has to run 8000 terraform runs, and there is an orchestrator dependency chain, then even with some very well provisioned CI servers I may be waiting hours for a run. In a large team and estate, this starts to cripple your productivity.

I was thinking about how best to solve some of these issues when I realised that the main reason the design choices had led down this path was that there was no real way within terraform to intelligently and dynamically operate across a large number of AWS accounts. And the reason for that is that there is no support for dynamic providers.

Whilst providers can reference attributes, the number of providers is always fixed in the terraform configuration, and this means that if you want to operate on 1000 accounts in a single terraform run, you will need 1000 provider blocks. Or if you want to operate on each account using 2 IAM roles, you will need 2000 provider blocks.

Managing these provider blocks could be done by simply hardcoding them to each aws-account-id or each role-id - and maintaining a lot of these in your repository. For example, you could have a provider.account1.tf for each account, in there specify 2 provider blocks with hardcoded values for each of your roles you want to use in the respective account.

But these providers need passing through to the modules in the correct way, so you also need a module block that is going to call a module with a different set of providers for each account as well. And as with every module, you need to pass through all the variables needed too.
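The per-account file pattern described in the two paragraphs above can be sketched roughly as follows (account IDs, role names, and the module interface are hypothetical); each account gets its own copy of this file:

```terraform
# provider.account1.tf — repeated, with edits, for every account
provider "aws" {
  alias = "account1_admin"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/Admin"
  }
}

provider "aws" {
  alias = "account1_readonly"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/ReadOnly"
  }
}

module "cloudtrail_account1" {
  source = "./modules/cloudtrail"

  # the module must declare aws.admin and aws.readonly
  # via configuration_aliases in its required_providers block
  providers = {
    aws.admin    = aws.account1_admin
    aws.readonly = aws.account1_readonly
  }

  account_id = "111111111111"
}
```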

This works; it's just a lot of files and blocks to maintain. It does, however, allow a different model of operating at scale, which is to now get terraform to do work across all accounts in a single terraform run, and that can be aligned to a specific feature. For example, you could have a "cloudtrail" module which sets up cloudtrail in every account (in the days before the cloudtrail AWS Organizations feature, as an example). You could imagine that you want these 1000 cloudtrails stored in a bucket too in a different account, so your single terraform run/statefile contains the code to create the cloudtrail bucket in some audit account with associated policies, and then create all the cloudtrails across all accounts, feeding them into the bucket, all as one feature.

This would seem pretty neat! Now with statefiles split along feature lines, because developers tend to naturally work on features and not on accounts, this aligns the developer iteration with the code iteration. Different features can be worked on in isolation (just as different developers work on them in isolation).

Onboarding new accounts and offboarding old ones is a little trickier, as every feature needs a run to onboard/offboard them in the respective account, but since each account's resources form a "module", using -target is easy, so that works too.

It's a bit ugly though to maintain 1000s of provider blocks and associated module blocks, and the interactions between them (the dependencies that may be required between modules, like in the case above between the module doing the account which contains the cloudtrail bucket, and the module doing the accounts which contain the cloudtrail).

It would be a very nice feature to be able to iterate through a list of accounts and generate provider blocks and associated module blocks, which means provider{} and module{} should both be made to support for_each and all the supporting ecosystem. Currently only module{} supports it.
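What is being asked for, then, is roughly the following. Note that this is hypothetical syntax Terraform does not accept today (role name and module path are placeholders):

```terraform
variable "account_ids" {
  type = list(string)
}

# NOT valid Terraform today: provider blocks do not support for_each
provider "aws" {
  for_each = toset(var.account_ids)
  alias    = each.key

  assume_role {
    role_arn = "arn:aws:iam::${each.key}:role/Admin"
  }
}

module "baseline" {
  source   = "./modules/baseline"
  for_each = toset(var.account_ids)

  providers = {
    aws = aws[each.key] # also not valid today
  }
}
```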

I managed to get such a system up and running using terragrunt doing the dynamic generation of provider blocks and module blocks - I put a POC here if anyone is interested: https://github.com/gtmtechltd/terragrunt-poc - it works and I was surprised at how much I loved being able to terraform a multi-account feature with different aspects in different accounts in one terraform run.

However, I turn my nose up a little at such metaprogramming (programs to write programs); it's all just a bit hard to read. I'd much rather this was supported directly in terraform. Also, terragrunt doesn't really have support for datasources itself, and the generation phase has to come before any terraform runs happen, so you can't do something even cooler, which would be to query the aws_organizations object as a datasource, get all the accounts, and then just create all the dynamic providers from that and apply your stuff everywhere.

I appreciate that 99% of your userbase are probably just terraforming a small infra in a dev/stage/prod setup (and we do have all sorts of terraform enterprisey things going on), but I thought it would be good to write up, as a general feature request, the reasons why I think dynamic providers would really help out at scale and offer some genuinely great alternatives for organising workloads in ways which are small, isolated, and aligned with developer workflow.

If the terraform team have any other ideas and best practices about how to divide resources along statefile boundaries which lend themselves well to operating a large cloud estate with a large team at scale, I'd be really interested to hear from experience in the field.

Attempted Solutions See https://github.com/gtmtechltd/terragrunt-poc for a workaround using terragrunt

Proposal Allow provider blocks to support for_each, and to be dynamically created from other data structures (which could be just vanilla variables rather than derived from datasources, but the icing on the cake would be if they could be made from the output of datasources too)

MrDrMcCoy commented 2 years ago

My use case for this would be for deploying a non-fixed number of Kubernetes clusters that you then need to deploy common manifests into (security, monitoring, ingress, etc). Each cluster created by Terraform would require its own K8S or Helm provider definition, which is extraordinarily unruly and verbose in vanilla Terraform.
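The verbosity described here looks roughly like this: one provider pair per cluster, repeated by hand. A sketch using helm provider 2.x-style syntax; the cluster module and its output names are hypothetical:

```terraform
provider "kubernetes" {
  alias                  = "cluster_a"
  host                   = module.cluster_a.endpoint
  cluster_ca_certificate = module.cluster_a.ca_certificate
  token                  = module.cluster_a.token
}

provider "helm" {
  alias = "cluster_a"

  kubernetes {
    host                   = module.cluster_a.endpoint
    cluster_ca_certificate = module.cluster_a.ca_certificate
    token                  = module.cluster_a.token
  }
}

# ...and an identical pair again for cluster_b, cluster_c, and so on,
# since neither block can be generated with for_each.
```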

mcandre commented 2 years ago

I've had to wrap the ungainly HCL code inside of Jinja2 as a workaround. And explain the need for Jinja2 to several mystified developers.

Seems that a mistake in parsing logic has uniquely broken the for loop here, compared to most other programming languages. Maybe just deprecate that old for syntax and create a new token with a more useful loading priority.


nikolay commented 2 years ago

I gave up on this! Honestly, CDK for Terraform seems to be the only (kind of) clean and HashiCorp-backed solution right now, excluding Terraform Cloud support: I can generate dynamic providers and even name resources dynamically as a plus. You can have the CDKTF code act simply as glue code for the HCL code organized in modules. In addition, you can break your monolithic HCL codebase into stacks within the same workspace. The only drawback is that you can't really put pieces of your code in traditional Go modules and reuse them across projects.

thunder-spb commented 2 years ago

Oh, yeah. Suffering from the same issue. In my case, I have ECR and replication set up, but AWS ECR replication does not sync repository permissions or lifecycle policies from the source repository to the replicated ones, so I have to pre-create repositories with all required permissions and lifecycles in the target regions/accounts. That also allows me to clean up the replicated repositories if the source repository has been deleted... The replication regions' list could be dynamic...

Right now, I have to make a lot of kludges to address this... :(

So, I would love to have dynamically defined providers! +1 for this change!!

nimblenitin commented 2 years ago

Is there any workaround available to use a variable in a provider reference, something like below? If yes, please let me know how to do it. Currently Terraform does not allow using a variable in the provider name for a module. My goal is to create a role in each member account the user provides; the user will supply the alias, which I will use to create the role.

providers = {
    aws = "${lookup(var.names, element(var.accounts, count.index))}"
  }
tmccombs commented 2 years ago

As far as I know, the only workaround is to use another tool to dynamically generate Terraform config.
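For reference, the form Terraform does accept in a module call is a static alias reference, not an expression. A sketch, with a hypothetical alias, role, and module path:

```terraform
provider "aws" {
  alias = "member_a"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
  }
}

module "member_role" {
  source = "./modules/iam_role"

  # must be a literal <provider>.<alias> reference, never an expression
  providers = {
    aws = aws.member_a
  }
}
```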

jcollado commented 2 years ago

I'm using doit and jinja2 templates to generate terraform files. Just in case that is useful to anybody, I've uploaded the code to this gist.

Volatus commented 2 years ago

I'd really like to see this feature implemented and would be willing to try and work on it with some guidance if someone from the community could give some pointers.

devopsinfoltd commented 2 years ago

I know there must be lots of new feature requests, but can someone from the members/contributors please update us, so we can start investing time/money to write our own dirty wrappers? Please, we are really in need of this feature. Thanks.

prmarino1m commented 2 years ago

Is there any movement on this? It was buried so deep in the list that I couldn't find it, and I created a new feature request for the same thing, even including an example where the for_each loop iterates over the names provided by:


data "aws_regions" "all" {
  all_regions = true
  filter {
    name = "opt-in-status"
    values = ["opted-in" , "opt-in-not-required"]
  }
}

provider "aws" {
  for_each = toset(data.aws_regions.all.names)
  alias = each.key
  region = each.key
}
ejtbrown commented 2 years ago

Bumping for visibility. This issue is a significant flaw in Terraform.

raelix commented 2 years ago

Is there any workaround for this? Thanks!

air3ijai commented 2 years ago

It is a long post with a lot of workarounds, but all of them involve external tools/services or different approaches:

  1. Code organization - same as use Terraform modules
  2. esyscmd - template engine
  3. Terragrunt - wrapper for Terraform
  4. Jinja2 - template engine
  5. CDKTF - Cloud Development Kit for Terraform
  6. env0 - IaC workflows
  7. Dynamic provider configuration - official statement
  8. gomplate - template engine
  9. atmos - workflow automation tool
  10. terramate - IaC collaboration, visibility and observability platform
  11. OpenTofu - fork of Terraform that is open-source, community-driven, and managed by the Linux Foundation
And a question. But I have another question about how to handle dynamic resources for different providers, using a template engine or this request when it is implemented.

```terraform
provider "aws" {
  region = "eu-west-1"
  alias  = "eu-west-1"
}

provider "aws" {
  region = "eu-west-2"
  alias  = "eu-west-2"
}

resource "aws_acm_certificate" "primary" {
  domain_name       = "domain.com"
  validation_method = "DNS"
  provider          = aws.eu-west-2
}

resource "aws_acm_certificate" "secondary" {
  domain_name       = "domain.com"
  validation_method = "DNS"
  provider          = aws.eu-west-1
}
```

This static code works well, but just for creating new resources. If we remove the code for `eu-west-2`, apply will fail with an error about the missing provider:

> Error: Provider configuration not present
>
> To work with aws_acm_certificate.primary its original provider configuration at provider["registry.terraform.io/hashicorp/aws"].eu-west-2 is required, but it has been removed. This occurs when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy aws_acm_certificate.primary, after which you can remove the provider configuration again.

So, a kind of dynamic regions (when it is implemented) will fail in the same way if a provider is removed before its resources are destroyed when we change the list of regions:

```terraform
locals {
  regions = ["eu-west-1", "eu-west-2"]
}

provider "aws" {
  for_each = local.regions
  alias    = each.value
}

resource "aws_acm_certificate" "regions" {
  domain_name       = "domain.com"
  validation_method = "DNS"
  for_each          = local.regions
  provider          = "aws.${each.value}"
}
```

Maybe @Nuru's [proposal](#issuecomment-877438031) with an additional `enabled` attribute may solve this.
hypervtechnics commented 2 years ago

Maybe this can be handled similarly to prevent_destroy, which has to first be set to false and applied before the resource can actually be removed. I'd rather not store especially sensitive credentials used for creating resources in the state.

The mentioned enabled flag could control that mechanism. If it is set to false the resources associated with that provider will be treated as non-existent in terraform code and therefore are about to be deleted.
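Under that proposal, the lifecycle problem from the earlier ACM example might look something like this. Again, hypothetical syntax that is not implemented:

```terraform
# Hypothetical: keep the provider configuration around, but mark it
# disabled so its resources are planned for destruction first, then
# the block itself can be removed on a later apply.
provider "aws" {
  alias   = "eu-west-2"
  region  = "eu-west-2"
  enabled = false # not a real provider argument today
}
```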

prmarino1m commented 1 year ago

Obviously there has been no movement on this 3-year-old issue, which I suspect has related older issues if you dig into all the closed ones. I'm wondering if someone from HashiCorp can give us a reason why this is a hard thing to implement, and maybe point out what would need to change in the code so we can contribute the required fixes to get this working. This issue has become a chain of "here is how I've worked around it" and people repeating the different ways they would like it to work, but no one seems to be pointing out the underlying reason why it doesn't work that way already.

prmarino1m commented 1 year ago

By the way, I'm assuming the root of the issue is a lack of key information about providers in the state storage.

Waldonutz commented 1 year ago

+1 for this; it would be useful. I'm trying to deploy resources to 30+ accounts at the moment without blowing out my code; a simple looping mechanism like for_each for providers would be appreciated.

kemalizing commented 1 year ago

Any updates on this issue?

I also want to link another issue, which is directly related: https://github.com/hashicorp/terraform/issues/24476

@teamterraform should take a look at these issues as they are one of the top πŸ‘ issues and impacting a lot of people.

ebarrere commented 1 year ago

Yes please. It is very difficult to follow AWS best-practices for Organizations (i.e. create an account for everything) without being able to loop over providers.

kehindeakala commented 1 year ago

Any updates on this implementation please?

pexa-akansal commented 1 year ago

Any Updates on this?

crw commented 1 year ago

Thank you for your comments. We are aware of this issue. There are no updates to report at this time.

As a reminder, please avoid "+1" and "any updates" comments, and to use the upvote mechanism (click or add the πŸ‘ emoji to the original post) to indicate your support for this issue. Thanks again for the feedback!

vierkean commented 1 year ago

Hello, I finally took the time to update from Terraform version 0.11 to the latest version. Now I realize this is not so easy, because in version 0.11 we passed the provider alias as a variable.

With which version will this be possible again?

freakinhippie commented 1 year ago

Hello, I finally took the time to update from terraform version 0.11 to the latest version. Now I have to realize that this is not so easy because in version 0.11 we pass provider alias as variable.

With which version will this be possible again?

@vierkean this should probably be opened as a question on the HashiCorp developer forums. However, providers within modules can be defined using the configuration_aliases directive in the required_providers block. See the docs here.

There is a notable exception to the legacy practice of passing providers by alias: now, all providers defined in configuration_aliases must be explicitly passed into the module calls rather than inherited implicitly.
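A sketch of that configuration_aliases pattern; the module path and regions here are placeholders:

```terraform
# In the module (e.g. modules/app/versions.tf):
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.primary, aws.secondary]
    }
  }
}

# In the root configuration — both aliases must be passed explicitly:
provider "aws" {
  alias  = "primary"
  region = "us-east-1"
}

provider "aws" {
  alias  = "secondary"
  region = "eu-west-1"
}

module "app" {
  source = "./modules/app"

  providers = {
    aws.primary   = aws.primary
    aws.secondary = aws.secondary
  }
}
```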

deepbrook commented 1 year ago

For anyone looking for a workaround, there's really only one that made sense to me:

Multiple invocations of the same plan with different inputs for each provider.

I run my configuration in a CI/CD job matrix, each with different provider credentials/configs for the terraform command's environment.

IMO, completely feasible for most use-cases. And I think in cases where it isn't, there's probably a code smell somewhere which could be taken care of with a little refactoring.

kahawai-sre commented 1 year ago

@deepbrook I'm likely missing something here ... but does that mean you are maintaining a single deployment (plan/apply/state file) per distinct provider instance in your environment? If that is the case, and if there are approvals etc for each plan, does that not become a burden as the number of pipeline runs (alias configurations) scales? Also, does that approach not make it harder to orchestrate resource dependencies across multiple target deployment scopes i.e. through a single deployment?

Mike-Nahmias commented 1 year ago

@kahawai-sre I don't use a job matrix like @deepbrook mentioned but I basically took the same approach. Yes, for me I have a separate deployment/state per provider instance (different regions). We haven't had any issues so far as we scale. For handling resource dependencies I use Terragrunt. I use it for other stuff too but it makes it really easy to handle the inter-deployment dependency piece.

deepbrook commented 1 year ago

@deepbrook I'm likely missing something here ... but does that mean you are maintaining a single deployment (plan/apply/state file) per distinct provider instance in your environment? If that is the case, and if there are approvals etc for each plan, does that not become a burden as the number of pipeline runs (alias configurations) scales? Also, does that approach not make it harder to orchestrate resource dependencies across multiple target deployment scopes i.e. through a single deployment?

@kahawai-sre, that's exactly what that means, yes :)

My deployments do not run at a big scale, to be fair. As for approvals, I do not have those - I deploy infrastructure to preprod and run tests to ensure all features required are reachable/configured correctly, and then deploy them.

I do have some dependencies between the deployments, since my infrastructure isn't managed in a single repository but in several (basic platform infra like networking and IAM roles in one, the k8s cluster and database in one each, etc.). It's only a matter of triggering downstream pipelines (I like pipelines :P).

@Mike-Nahmias approach using terragrunt sounds like a good alternative if you're a mono repo kinda person. :)

dbhaigh commented 1 year ago

Same issue, new day... I have a load of Terraform deploying resources to several environments, each requiring its own provider alias

I want to dedupe this lovely collection of n00dles to a single resource block with a for_each loop that iterates over a local or var that provides the required provider alias for each resource

I don't want to have to use modules for this, as I feel it's an ugly way of doing it and will make things far harder and more complicated than how I'm doing it now (separating each resource block, and any associated resources, out into its own file), which makes it very easy to deploy new resources of the same type (Ctrl-C, Ctrl-V, change the names of the guilty, and et voilà! there you are)

What I want is to be able to have something like this

resource "resource_type" "resource_name" {
  for_each = local/var.resource_values
  provider = each.provider
  ...
}

Without being told that "each" is an invalid provider reference

This is a major hole, and an old one; it should have been backfilled years ago

Can we please get some action on this, and save me from having to go down the ugly rabbit hole of modules?

TIA

duxing commented 1 year ago

coming back to this issue after a couple years and still not supported :(

nikolay commented 1 year ago

I guess, we have no other option but to move this issue to https://github.com/opentofu/opentofu - maybe they can implement this must-have feature now that they have a larger staff dedicated to Terraform!

nikolay commented 1 year ago

The strategy is intentionally limiting Terraform and solving these limitations by design only in TFC and patenting them!

Poltergeisen commented 11 months ago

700 upvotes on this makes me think we should at least get some sort of update or explanation on why this isn't possible.

A ton of the Terraform configurations just "work" with this type of syntax, why not providers as well?

nikolay commented 11 months ago

@Poltergeisen They want you to use their upcoming Terraform Cloud Stacks feature, so they have no incentive to fix this for all of us!

nitoxys commented 11 months ago

700 upvotes on this makes me think we should at least get some sort of update or explanation on why this isn't possible.

A ton of the Terraform configurations just "work" with this type of syntax, why not providers as well?

There's another request like this, but the developers stated that providers are loaded at init and it would have to be rearchitected. Technically, a provider_override could be an option within a provider.

osterman commented 11 months ago

Fwiw, we've solved this problem using atmos with Terraform, and regularly deploy Terraform root modules (what we call components) to dozens or more accounts in every single AWS region (e.g. for compliance baselines), without HCL code generation.

IMO, even if this requested feature existed in Terraform, we would not use it, because it tightly couples the Terraform state to multiple regions, which not only breaks the design principle for DR (that regions share nothing), but makes the blast radius of any change massive and encourages "terralythic" root module design. In our design pattern, we instantiate a root module once per region, ensuring that each instantiation is decoupled from the others.

The one exception to this is when we set up things like transit gateways (with hubs and spokes); then we declare two providers, so we can configure source and target destinations. This ensures no two gateways are tightly coupled to each other.
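That hub-and-spoke exception is the one place where two providers in one root module are hard to avoid. A sketch, with hypothetical aliases and regions:

```terraform
provider "aws" {
  alias  = "hub"
  region = "us-east-1"
}

provider "aws" {
  alias  = "spoke"
  region = "us-west-2"
}

resource "aws_ec2_transit_gateway" "hub" {
  provider = aws.hub
}

resource "aws_ec2_transit_gateway" "spoke" {
  provider = aws.spoke
}

# The peering attachment needs both sides: it is created from the
# spoke's side and points at the hub's gateway and region.
resource "aws_ec2_transit_gateway_peering_attachment" "hub_to_spoke" {
  provider                = aws.spoke
  transit_gateway_id      = aws_ec2_transit_gateway.spoke.id
  peer_transit_gateway_id = aws_ec2_transit_gateway.hub.id
  peer_region             = "us-east-1"
}
```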

TL;DR: I acknowledge why at first glance not supporting multiple providers in a loop seems like an awful limitation and why some things would be simplified if it were supported; that said, we deploy enormous infrastructures at Cloud Posse in strict enterprise environments, and don't feel any impact of this inherent limitation.

yves-vogl commented 11 months ago

I could not agree more to @osterman

I've been following this thread for a long time, and I also came here because I thought I needed this feature. Meanwhile, I think that relying on a feature like this could be a sign of bad design. As @osterman said, just think of the enormous blast radius.

One could argue that there are certain cases where you want to dynamically create provider instances, e.g. in an AWS organization, to assume the OrganizationAccountAccessRole for bootstrapping things. But once you think about managing the whole lifecycle of hundreds of those accounts, you'll realize that this does not scale. Instead, I now use AFT with AWS Control Tower to customize those accounts.

And maybe a few words on all the hate against HashiCorp. I also think their pricing model is insane, and from the times I had contact with the sales team (and even afterwards with their legal team) I can tell you that they somewhat overestimate what they bring to the table, as there are valuable alternatives nowadays. But please also keep in mind that there are a lot of companies which make thousands of dollars each day without giving anything back to the community. Everyone who demands features should ask themselves how much they're giving back in the form of contributions or donations before making rude demands and spreading hate.

Yes, HashiCorp is bad at communication, and in my opinion they are now somehow trying to squeeze a lot of money out of their customers. But please do not forget the impact they have had on our community. I remember the old days back around 2010 when @mitchellh created Vagrant. It was the first time we could manage those VMs in a way which did not suck.

Having said all those things… I understand the demand, and I feel that the lack of communication from HashiCorp contributed to this situation.

They could have done better by pointing out the reasons this feature is not available and demonstrating how to solve the problem with proper design.

Love matters, Yves

SamuelMolling commented 10 months ago

The thread is quite large. Do we have any way to iterate over the providers of a resource? Has anything like this been made available yet?

air3ijai commented 10 months ago

One of the comments with a recap.

dbhaigh commented 10 months ago

This post, https://github.com/hashicorp/terraform/issues/19932#issuecomment-1815164297, only three posts back in the thread and on this page, answers your question, had you actually bothered to read it

afrazkhan commented 8 months ago

There's another use case which I think hasn't been mentioned: Optional resource creation within a module.

I have a module which mostly provisions AWS resources, but there's a single Github resource that is created conditionally (if a particular key is present in one of the module's variable objects). It's likely that many of the people using the module aren't using Github, or don't need this particular feature / resource, so I don't want to add it to the required providers block.
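That situation reads roughly like this (resource and variable names are hypothetical; `optional()` object attributes need Terraform 1.3+). Even with `count = 0`, every consumer of the module still has to be able to install and configure the GitHub provider, because Terraform resolves required providers before evaluating `count`:

```terraform
variable "repo_settings" {
  type = object({
    github_repository = optional(string)
  })
}

# Created only when the key is present in the variable object — but the
# github provider is still required by every caller, used or not.
resource "github_repository_webhook" "deploy" {
  count      = var.repo_settings.github_repository != null ? 1 : 0
  repository = var.repo_settings.github_repository

  configuration {
    url          = "https://example.com/hook"
    content_type = "json"
  }

  events = ["push"]
}
```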