hashicorp / terraform-provider-helm

Terraform Helm provider
https://www.terraform.io/docs/providers/helm/
Mozilla Public License 2.0

Unable to provision the same Helm chart into multiple GKE namespaces at one time #743

Closed: ranjitk-burwood closed this issue 1 year ago

ranjitk-burwood commented 3 years ago

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 0.13.5
Helm Provider version: 2.1.2
Kubernetes version: 1.13.3

Affected Resource(s)

helm_release

Terraform Configuration Files (Root Module)

provider "helm" {
  kubernetes {
    host                   = data.google_container_cluster.my_cluster.endpoint
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
  }
}

module "helm_deployment" {
  source = "../../../modules/platform/helm"

  release_name   = var.helm_release_name
  repository_url = var.repository_url
  helm_chart     = var.helm_chart
  helm_version   = var.helm_version
  timeout        = var.helm_timeout
  values         = [file(var.values)]
  namespaces     = data.terraform_remote_state.class_section.outputs.map_class_section_ip
}
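
For reference, below is a minimal sketch of the variable declarations the child module would need for the arguments passed above. These are assumptions; the module's variables.tf is not shown in the issue. In particular, for_each requires namespaces to be a map or set, so map(string) is assumed here:

variable "release_name"   { type = string }
variable "repository_url" { type = string }
variable "helm_chart"     { type = string }
variable "helm_version"   { type = string }
variable "timeout"        { type = number }
variable "values"         { type = list(string) }

# Assumed to hold the namespace/IP pairs; the for_each in the child module iterates over it.
variable "namespaces"     { type = map(string) }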

Terraform Configuration Files (Child Module)

resource "helm_release" "jupyterhub" {
  for_each = var.namespaces

  name       = var.release_name
  repository = var.repository_url
  chart      = var.helm_chart
  version    = var.helm_version
  namespace  = each.key
  timeout    = var.timeout
  values     = var.values

  set {
    name  = "proxy.service.loadBalancerIP"
    value = each.value
  }

  set {
    name  = "proxy.secretToken"
    value = random_id.secret_token.id
  }
}
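
The block above references random_id.secret_token, which is not shown in the issue. A minimal sketch of what that companion resource might look like (the byte_length value is an assumption):

# Hypothetical definition of the secret token referenced above (hashicorp/random provider).
resource "random_id" "secret_token" {
  byte_length = 32
}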

Steps to Reproduce

  1. Run through Cloud Build with terraform init, terraform plan, and terraform apply.

Expected Behavior

I am expecting to see the same JupyterHub helm chart provisioned into two GKE namespaces when my Cloud Build pipeline runs once.

Actual Behavior

I only see one of the namespaces provisioned properly. I need to re-run the Cloud Build pipeline for Helm to be installed into the second namespace. The Cloud Build run does not fail or error out.

Important Factoids

I am using a for_each loop inside of the child module to read a map from my remote state file. The map has the form {IP Address : Namespace}.

jrhouston commented 3 years ago

Hi @ranjitk-burwood thanks for opening an issue. I just tried to reproduce this without success using the following config:

locals {
  namespaces = {
    "1.2.3.4" = "blue"
    "4.5.6.7" = "green"
  }
}

resource "helm_release" "test" {
  for_each = local.namespaces

  namespace  = each.value
  name       = "redis"

  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"
}

Output shows I get two releases in two namespaces:

$ terraform apply --auto-approve 
helm_release.test["4.5.6.7"]: Creating...
helm_release.test["1.2.3.4"]: Creating...
helm_release.test["4.5.6.7"]: Creation complete after 1m8s [id=redis]
helm_release.test["1.2.3.4"]: Creation complete after 1m6s [id=redis]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
$ helm ls --all-namespaces                                                                                             
NAME    NAMESPACE   REVISION    UPDATED                                 STATUS      CHART           APP VERSION
redis   blue        1           2021-05-05 01:36:33.636342 -0400 EDT    deployed    redis-14.1.1    6.2.3
redis   green       1           2021-05-05 01:36:30.694217 -0400 EDT    deployed    redis-14.1.1    6.2.3

Can you share a debug log with TF_LOG=DEBUG and HELM_DEBUG=1? There could be an error that is being swallowed.
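
For example, when running locally, something along these lines captures the debug output (the log file path here is just an illustration):

$ export TF_LOG=DEBUG
$ export TF_LOG_PATH=./terraform-debug.log
$ export HELM_DEBUG=1
$ terraform apply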

Are there any resources in your chart that are not namespaced? I'm able to produce a failure with a chart that has a ClusterRole in it, for example.
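
One quick way to check for cluster-scoped objects is to render the chart locally and grep the output. This assumes the JupyterHub chart repo has already been added locally under the name jupyterhub:

$ helm template jupyterhub jupyterhub/jupyterhub | grep -E '^kind: Cluster'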

ranjitk-burwood commented 3 years ago

Hi @jrhouston, sorry for the delayed response. I figured out that this was due to values I was setting in a configuration YAML that gets passed into the chart.

Specifically, for the JupyterHub chart it was setting the two values below to false that fixed my issue:

scheduling:
  userScheduler:
    enabled: false
  podPriority:
    enabled: false

If anybody comes across this and does need both of those fields enabled, my workaround was to run two terraform apply steps in succession in the Helm chart build step, so that all namespaces get set up properly (see the sketch below).
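
A rough sketch of that two-pass workaround as a cloudbuild.yaml fragment (the image tag, step ids, and inline commands are illustrative, not taken from the original pipeline):

steps:
  # First pass: init and apply; some namespaces may still come up incomplete here.
  - id: terraform-apply-pass-1
    name: hashicorp/terraform:0.13.5
    entrypoint: sh
    args: ['-c', 'terraform init && terraform apply -auto-approve']

  # Second pass: re-apply so the remaining namespaces get the release installed.
  - id: terraform-apply-pass-2
    name: hashicorp/terraform:0.13.5
    entrypoint: sh
    args: ['-c', 'terraform apply -auto-approve']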

If you want me to capture a debug log, let me know. I was trying this out through Cloud Build and wasn't sure how to get that configured there, but I could try running it through Cloud Shell instead.

github-actions[bot] commented 2 years ago

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.