hashicorp / terraform-provider-helm

Terraform Helm provider
https://www.terraform.io/docs/providers/helm/
Mozilla Public License 2.0

How do you use this provider in a non-legacy Terraform module? #1105

Open foxydevloper opened 1 year ago

foxydevloper commented 1 year ago

Question

I have a module that creates a cluster and installs a Helm release onto it using this provider. However, when I use the module with count or for_each, I get the following error:

The module at module.clusters is a legacy module which contains its own local provider configurations, and so calls to it may not use the count, for_each, or depends_on arguments.
If you also control the module "./cluster", consider updating this module to instead expect provider configurations to be passed by its caller.

So my question is: How do I use this provider in a non-legacy module? Is it even possible? Is there any other way that I could use Terraform to install helm releases onto a cluster managed by Terraform inside of a non-legacy module?

Terraform configuration

./cluster/terraform.tf

# required provider stuff (cloud provider & helm provider)
variable "region" {}
resource "digitalocean_kubernetes_cluster" "servers-cluster" {
  name   = "my-cluster-${var.region}"
  region = var.region
  node_pool {
    name       = "default"
    size       = "s-1vcpu-2gb"
    node_count = 1 # node_pool also requires a node count
  }
}
provider "helm" {
  kubernetes {
    host  = digitalocean_kubernetes_cluster.servers-cluster.endpoint
    token = digitalocean_kubernetes_cluster.servers-cluster.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.servers-cluster.kube_config[0].cluster_ca_certificate
    )
  }
}
resource "helm_release" "example" {
  # insert helm chart to install
}

./terraform.tf

module "clusters" {
  source   = "./cluster"
  for_each = toset(["nyc3", "sgp1"])
  region   = each.key
}

Terraform version, Kubernetes provider version and Kubernetes version

Terraform version: v1.4.2
Helm Provider version: 2.27.1
Kubernetes version: any
EnergoStalin commented 1 year ago

You can't use for_each with a module which declares a provider block; read this for an explanation. Or, to put it simply, define all provider blocks one level above the module that uses for_each or depends_on. Also, it's not a Helm issue.

foxydevloper commented 1 year ago

You can't use for_each with a module which declares a provider block; read this for an explanation. Or, to put it simply, define all provider blocks one level above the module that uses for_each or depends_on. Also, it's not a Helm issue.

Thanks for the unhelpful info. I put it under the "question" label for a reason. If you read what I posted, you'll see that there is no way to define the provider block a level above in my use case. I'm not the only one who's tried doing this; even Google's example Terraform module that installs a Helm package uses the provider inside of a module: https://github.com/googleforgames/agones/blob/release-1.30.0/install/terraform/modules/helm3/helm.tf They should make connecting to a Kubernetes cluster a resource instead of a provider, so that it can be used inside of a module. Otherwise, it's impossible to use this provider in any useful form in a non-legacy Terraform module.

alexsomesan commented 1 year ago

Hi,

I'm not 100% sure what kind of answer are you looking for. The module in question IS in fact deemed legacy, as the error message suggests.

Are you the maintainer of this module or is this a 3rd party module that you want to just consume?

There are different solutions available depending on which of these two situations you find yourself in.

foxydevloper commented 1 year ago

Hi,

I'm not 100% sure what kind of answer are you looking for. The module in question IS in fact deemed legacy, as the error message suggests.

Are you the maintainer of this module or is this a 3rd party module that you want to just consume?

There are different solutions available depending on which of these two situations you find yourself in.

I'm the maintainer of the module, yes. I need to spin up multiple clusters and load a specific Helm package onto each of them, but there doesn't seem to be a way to do this in a non-legacy module, since connecting the Helm provider to the cluster happens in a provider block. I could possibly live with it being a legacy module and copy/paste the "module" block with a different value for each region.

alexsomesan commented 1 year ago

So the "legacy" designation actually refers to the module containing provider blocks, whereas the preferred "modern" approach is to pre-configure the provider block outside the module and then pass it in, similar to how you would any other attribute value. You can read in detail about this here: https://developer.hashicorp.com/terraform/language/modules/develop/providers.

This approach, combined with using provider aliases (explained here: https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations) should get you pretty close to what I think you are after.
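A minimal sketch of that pattern (module and alias names are illustrative, not from the original question): the child module only declares that it needs the Helm provider, while the caller configures it and passes it in through the providers map.

```hcl
# ./cluster/terraform.tf -- the child module declares, but does not
# configure, the provider it needs
terraform {
  required_providers {
    helm = {
      source = "hashicorp/helm"
    }
  }
}

# ./terraform.tf -- the root module configures the provider and passes it in
provider "helm" {
  alias = "nyc3"
  kubernetes {
    # connection settings for the nyc3 cluster go here
  }
}

module "cluster_nyc3" {
  source = "./cluster"
  region = "nyc3"
  providers = {
    helm = helm.nyc3
  }
}
```

Note that the module block stays static (no for_each); each cluster gets its own provider alias and its own module block.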

Let me know if you have further questions and if that's getting you any closer to your goal.

madduci commented 1 year ago

Hi

I'm experiencing the following error in my project. I have a submodule called "grafana", and in this module I don't have a provider definition; instead I use what I've defined in the root module (using version 2.9.0 of the Helm provider).

I get this error when I perform a terraform apply:

module.grafana[0].helm_release.grafana: Creating...
╷
│ Error: could not download chart: Chart.yaml file is missing
│
│   with module.grafana[0].helm_release.grafana,
│   on grafana/main.tf line 1, in resource "helm_release" "grafana":
│    1: resource "helm_release" "grafana" {

It seems like it can't download charts.

Issue #1066 appears to report the same error.

Here are some trace logs:

2023-04-28T11:02:51.650+0200 [INFO]  provider.terraform-provider-helm_v2.9.0_x5: 2023/04/28 11:02:51 [DEBUG] [INFO] GetHelmConfiguration success: timestamp=2023-04-28T11:02:51.650+0200
2023-04-28T11:02:51.650+0200 [INFO]  provider.terraform-provider-helm_v2.9.0_x5: 2023/04/28 11:02:51 [DEBUG] [resourceReleaseCreate: grafana] Getting chart: timestamp=2023-04-28T11:02:51.650+0200
2023-04-28T11:02:51.674+0200 [TRACE] provider.terraform-provider-helm_v2.9.0_x5: Called downstream: @module=sdk.helper_schema tf_provider_addr=provider tf_resource_type=helm_release tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/terraform-plugin-sdk/v2@v2.22.0/helper/schema/resource.go:838 tf_req_id=3ef4ad27-aa4e-017e-52c2-9834214fa434 timestamp=2023-04-28T11:02:51.674+0200
2023-04-28T11:02:51.674+0200 [TRACE] provider.terraform-provider-helm_v2.9.0_x5: Received downstream response: tf_req_id=3ef4ad27-aa4e-017e-52c2-9834214fa434 tf_resource_type=helm_release tf_rpc=ApplyResourceChange @module=sdk.proto diagnostic_error_count=1 tf_proto_version=5.3 tf_req_duration_ms=25 @caller=github.com/hashicorp/terraform-plugin-go@v0.14.0/tfprotov5/internal/tf5serverlogging/downstream_request.go:37 diagnostic_warning_count=0 tf_provider_addr=provider timestamp=2023-04-28T11:02:51.674+0200
2023-04-28T11:02:51.674+0200 [ERROR] provider.terraform-provider-helm_v2.9.0_x5: Response contains error diagnostic: diagnostic_detail= tf_provider_addr=provider @caller=github.com/hashicorp/terraform-plugin-go@v0.14.0/tfprotov5/internal/diag/diagnostics.go:55 @module=sdk.proto tf_resource_type=helm_release tf_rpc=ApplyResourceChange diagnostic_severity=ERROR diagnostic_summary="could not download chart: Chart.yaml file is missing" tf_proto_version=5.3 tf_req_id=3ef4ad27-aa4e-017e-52c2-9834214fa434 timestamp=2023-04-28T11:02:51.674+0200
2023-04-28T11:02:51.674+0200 [TRACE] provider.terraform-provider-helm_v2.9.0_x5: Served request: tf_req_id=3ef4ad27-aa4e-017e-52c2-9834214fa434 tf_resource_type=helm_release tf_rpc=ApplyResourceChange @module=sdk.proto tf_proto_version=5.3 tf_provider_addr=provider @caller=github.com/hashicorp/terraform-plugin-go@v0.14.0/tfprotov5/tf5server/server.go:831 timestamp=2023-04-28T11:02:51.674+0200
2023-04-28T11:02:51.675+0200 [TRACE] maybeTainted: helm_release.grafana[0] encountered an error during creation, so it is now marked as tainted
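For comparison, a helm_release that pulls a chart from a remote repository needs both repository and chart set; "could not download chart: Chart.yaml file is missing" is commonly seen when chart resolves to a local path that doesn't actually contain a chart. A sketch with illustrative values (version and namespace are assumptions, not from this thread):

```hcl
resource "helm_release" "grafana" {
  name       = "grafana"
  repository = "https://grafana.github.io/helm-charts" # chart repository URL, not a local path
  chart      = "grafana"
  version    = "6.50.0"      # illustrative chart version
  namespace  = "monitoring"  # illustrative namespace
}
```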
hassan-cevo commented 1 year ago

So the "legacy" designation actually refers to the module containing provider blocks, whereas the preferred "modern" approach is to pre-configure the provider block outside the module and then pass it in, similar to how you would any other attribute value. You can read in detail about this here: https://developer.hashicorp.com/terraform/language/modules/develop/providers.

This approach, combined with using provider aliases (explained here: https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations) should get you pretty close to what I think you are after.

Let me know if you have further questions and if that's getting you any closer to your goal.

All of this makes sense, but the problem here is that it is not possible to configure the provider outside the module, because the provider configuration requires values that are generated by resources within the module itself. For example, digitalocean_kubernetes_cluster.servers-cluster.endpoint can only be known once the module has created its resources; it cannot be known before.

provider "helm" {
  kubernetes {
    host  = digitalocean_kubernetes_cluster.servers-cluster.endpoint
    token = digitalocean_kubernetes_cluster.servers-cluster.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.servers-cluster.kube_config[0].cluster_ca_certificate
    )
  }
}
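One way around that, for a static (non-for_each) module instance, is to have the module output the connection details and configure the provider in the root from those outputs. A sketch, assuming a single module "cluster" block and illustrative output names:

```hcl
# ./cluster/outputs.tf -- expose connection details from the module
output "endpoint" {
  value = digitalocean_kubernetes_cluster.servers-cluster.endpoint
}

output "kube_config" {
  value     = digitalocean_kubernetes_cluster.servers-cluster.kube_config
  sensitive = true
}

# ./terraform.tf -- configure the provider in the root from the module's outputs
provider "helm" {
  kubernetes {
    host  = module.cluster.endpoint
    token = module.cluster.kube_config[0].token
    cluster_ca_certificate = base64decode(
      module.cluster.kube_config[0].cluster_ca_certificate
    )
  }
}
```

This only works because the module block is static; a provider configuration cannot depend on a for_each-expanded module instance.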
seungmlee commented 1 year ago

I generally agree with the direction, but it creates a massive problem when you use something like terraform-aws-eks-blueprints with for_each. It has many sub-modules, and now I have to provide aliases for the helm, kubectl, and kubernetes providers and add custom logic inside the sub-module. It soon becomes unmanageable. I would very much like the ability to either dynamically define a provider in the root module instead of hardcoding it upfront, or to allow for_each with a provider defined in the sub-module.

cdenneen commented 7 months ago

@apparentlymart ... What is the solution for issues like @seungmlee described? I've had this issue where I've had to supply providers in child modules.

Example of this is I have a custom module that creates "batteries included eks cluster"

Issue 1

eks_cluster/
  /modules
    /eks <-- [1]
    /gitlab
    /flux
    /bootstrap

[1] Here I need to apply kubernetes manifests to the provisioned cluster. This would use either the kubernetes provider or kubectl provider. The provider configuration is based on the eks cluster creation. This module of eks contains the community eks module as well as the eks_blueprints_addons. The addons require these CRDs and manifests in place in order to create NodePools for the addons to run.

The only solution I can think of to avoid needing this provider block in the child module would be to break the addons out into another module (eks-addons), at which point the provider config in the root module could hopefully work.

Issue 2

There are also cases where I don't want to run flux or bootstrap from the root module, so I've passed vars enable_bootstrap and enable_flux (defaulting to true), which is why I use count on those modules in the root module.

Technically each of these child modules could run on its own, which is why I was thinking of putting the providers into them and removing them from the root module altogether (letting each child module handle its own provider blocks, with the root module having none), but I ran into this issue of not being able to use count because the child module contains a provider block. Without count in the root, I'm not aware of any other way to conditionally include those modules in the root run.

apparentlymart commented 7 months ago

I think there's a few different things going on here that are making this situation confusing, and in particular Terraform is in a sense complaining about the wrong problem here, making matters more confusing still.

The topmost problem here is that Terraform cannot support a provider configuration whose presence is decided dynamically based on an expression. That's because provider configurations always need to outlive the resource instances that belong to them by at least one plan/apply round. If the configuration in the opening issue had worked then removing an item from the for_each set would cause a blocking problem: Terraform now needs to destroy all of the resource instances in that instance of the module, but it can't do so because the provider configuration used to do it was removed at the same time, and the provider must be configured in order to destroy the objects.

As a consequence of that constraint, Terraform expects provider configurations to be declared only in the root module, since the root module is always a singleton and so there's never any situation where the problem above can arise, unless you literally delete the provider block. This secondary constraint is what the error message is talking about, but in this case it's a distraction from the root problem, which is that it just isn't possible to have a dynamically-declared provider configuration, whether inside a child module or not.

With all of that said then, with today's Terraform it just isn't possible to achieve the desired goal. The closest you can get is to statically configure each Kubernetes cluster with separate blocks, not using for_each.

There are two variations of that:

1. Give each Kubernetes cluster its own separate configuration (and thus its own state), so that each root module configures its own providers.
2. Keep a single configuration, but write one static module block and one aliased provider configuration per cluster, passing the providers into each module block explicitly.

I'm sorry I can't just give you a single solution to the original problem as stated. If you want to get this done with today's Terraform, you will need to choose from one of the two compromises I described above, possibly with some code generation for the root module if you need to be able to add new Kubernetes clusters a lot.

One final variation is to dispense with the idea of all of the clusters being in the same configuration altogether, and instead use a separate state for each cluster. For example, you could potentially use multiple workspaces where each one is named after a Kubernetes cluster, and then use terraform.workspace as the cluster name. You could then make a new cluster by creating a new workspace. That's essentially the same as the first option above but using the workspace mechanism to handle the multiple instances, and thus the provider configuration would be in the root module.
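A sketch of that workspace-based variation, assuming one cluster per workspace and reusing the DigitalOcean example from the original question (illustrative, with the workspace name doubling as the region):

```hcl
# One cluster per workspace: `terraform workspace new nyc3`, then apply.
resource "digitalocean_kubernetes_cluster" "cluster" {
  name   = "my-cluster-${terraform.workspace}"
  region = terraform.workspace # workspace name doubles as the region
  node_pool {
    name       = "default"
    size       = "s-1vcpu-2gb"
    node_count = 1
  }
}

# Provider configuration lives in the root module, so the legacy-module
# error never arises; each workspace gets its own state and cluster.
provider "helm" {
  kubernetes {
    host  = digitalocean_kubernetes_cluster.cluster.endpoint
    token = digitalocean_kubernetes_cluster.cluster.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.cluster.kube_config[0].cluster_ca_certificate
    )
  }
}
```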

cdenneen commented 7 months ago

Thanks for the detailed explanation and reply. As an aside, I'm actually doing something similar to the "different workspaces" approach, but I'm using different TF_STATE_NAME values because GitLab doesn't support workspaces; it's been working well here.

My only concern now with breaking out to another child module is the state: moving those resources would cause them all to be deleted and then recreated.

Is there a way to move those dynamically?

cdenneen commented 7 months ago

Found the moved blocks.
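For anyone else landing here: a moved block tells Terraform that a resource's state address changed, so refactoring into a new child module doesn't destroy and recreate it. The addresses below are illustrative, not from this thread:

```hcl
# Records that the release moved from the root module into a new
# eks-addons child module; Terraform updates state instead of recreating.
moved {
  from = helm_release.addons
  to   = module.eks_addons.helm_release.addons
}
```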