hashicorp / terraform-provider-helm

Terraform Helm provider
https://www.terraform.io/docs/providers/helm/

import helm release does not import values #1086

Open mac2000 opened 1 year ago

mac2000 commented 1 year ago

Noticed some really strange behaviour.

My intent was to import an existing release into Terraform, and I expected terraform plan to say there is nothing to change. But no matter what I tried, it always says the values changed and that it will perform an in-place change, which is a little bit scary.

After trying to figure out what's going on, it seems that after the import the values are saved to attributes.metadata.values, but the actual attributes.values stays null - that's why, no matter what we do, Terraform thinks it needs to reapply the changes.

If we pass the same values that are used in the existing release, indeed nothing changes, but the revision is incremented, and the diff between revisions shows nothing.
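One way to see this (a rough sketch, assuming jq is installed and using the resource address from the configuration below; the attribute layout is how this provider version happens to store things in state) is to pull the raw state after the import and compare the two fields:

terraform state show helm_release.demo

# or inspect the raw state directly
terraform state pull \
  | jq '.resources[] | select(.type == "helm_release") | .instances[0].attributes | {values, metadata_values: .metadata[0].values}'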

Prerequisites:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm plugin install https://github.com/databus23/helm-diff

diff - proof of concept: we are going to deploy Redis, then make a small change and deploy it once again, and with the help of the diff plugin we will see what has changed

cat <<EOT > values.yml
architecture: standalone
auth:
  enabled: false
master:
  persistence:
    enabled: false
  nodeSelector:
    kubernetes.io/os: linux
commonLabels:
  app: demo
EOT

helm upgrade --install demo bitnami/redis --namespace=demo --create-namespace --values=values.yml

cat <<EOT > values.yml
architecture: standalone
auth:
  enabled: false
master:
  persistence:
    enabled: false
  nodeSelector:
    kubernetes.io/os: linux
commonLabels:
  app: demo
  hello: world # <- ADDED
EOT

helm upgrade --install demo bitnami/redis --namespace=demo --create-namespace --values=values.yml

helm -n demo diff revision demo 1 2
(screenshot: helm diff output showing the added hello: world label)

So the diff plugin works and shows us what we expect.

Now, if I try to import this release into the following Terraform configuration:

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0.1"
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = "/Users/mac/Documents/dotfiles/kube/dev.yml"
  }
}

/*
terraform import helm_release.demo demo/demo
*/
resource "helm_release" "demo" {
  name       = "demo"
  namespace = "demo"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"
  version    = "17.8.6"
  values = [
    "${file("values.yml")}"
  ]
}

terraform will complain that the values changed and will reapply them:

helm_release.demo: Refreshing state... [id=demo]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  ~ update in-place

Terraform will perform the following actions:

  # helm_release.demo will be updated in-place
  ~ resource "helm_release" "demo" {
        id                         = "demo"
        name                       = "demo"
      + repository                 = "https://charts.bitnami.com/bitnami"
      + values                     = [
          + <<-EOT
                architecture: standalone
                auth:
                  enabled: false
                master:
                  persistence:
                    enabled: false
                  nodeSelector:
                    kubernetes.io/os: linux
                commonLabels:
                  app: demo
                  hello: world
            EOT,
        ]
        # (25 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

and after this apply nothing actually changed (the diff between revisions shows nothing) and the deployment did not restart, but a new revision was indeed created
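To double-check that the extra revision really is a no-op, the same helm-diff plugin from the prerequisites can be used (a sketch; the revision numbers depend on your release history):

helm -n demo history demo
helm -n demo diff revision demo 2 3   # expect empty output if nothing actually changed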

Investigating deeper and comparing how the state changed, I see:

(screenshot: side-by-side comparison of the state before and after the apply)

Note: I have removed some unchanged values so that everything fits on the screen and we can see it all at once.

which makes me believe that, if we are definitely sure we have exactly the same values, it should be safe to perform an apply after the import

but it is strange that the import did not import the values; it is technically possible and may help future newcomers like me
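Until import populates the values, one way to at least guarantee that the configuration matches what is deployed is to regenerate the values file from the live release right before importing (a sketch; -o yaml prints only the user-supplied values):

helm -n demo get values demo -o yaml > values.yml
terraform import helm_release.demo demo/demo
terraform plan   # still shows values being set in-place, but with identical content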

Links:


workaround:

resource "helm_release" "front-internal" {
  name       = "front-internal"
  namespace  = "internal"
  chart      = "ingress-nginx"
  version    = "4.2.5"
  repository = "https://kubernetes.github.io/ingress-nginx"
  values = [yamlencode({ # helm -n internal get values front-internal
    controller = {
      config = {
        "large-client-header-buffers" = "4 16k"
        "proxy-buffer-size"           = "16k"
      }
      ingressClass = "internal"
      ingressClassResource = {
        controllerValue = "k8s.io/internal"
        default         = false
        enabled         = true
        name            = "internal"
      }
      kind = "DaemonSet"
      publishService = {
        enabled = true
      }
      replicaCount = 1
      service = {
        externalTrafficPolicy = "Local"
      }
    }
  })]
}

Make sure terraform plan shows a manifest similar to helm -n internal get values front-internal, so nothing should change. After applying, if the revision was incremented, check it with helm -n internal diff revision front-internal 19 20 to confirm that nothing actually changed.
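In command form, that verification looks roughly like this (a sketch; the revision numbers are whatever helm history reports before and after the apply):

helm -n internal get values front-internal            # compare with the values terraform plan wants to set
terraform plan
terraform apply
helm -n internal history front-internal               # note whether a new revision was created
helm -n internal diff revision front-internal 19 20   # expect no output if nothing actually changed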

BBBmau commented 1 year ago

Hello! Thank you for opening this issue. This is typically the workflow you would follow when using terraform import.

Running terraform apply once more is required in order to get the values applied to the helm_release resource with the correct formatting.
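A minimal sketch of that sequence, reusing the release from the original report:

terraform import helm_release.demo demo/demo
terraform plan    # values show up as an in-place change after the import
terraform apply   # writes the values into state with the expected formatting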

vsuzdaltsev commented 1 year ago

So, if I understand correctly, the release will be recreated?

Jbhadviya commented 11 months ago

I am also facing a similar issue, where Terraform is trying to update the Helm chart. When I compared the values, there was no change.

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.iac-module-starburst.helm_release.external-secrets-release will be updated in-place
  # (imported from "default/externalsecrets")
  ~ resource "helm_release" "external-secrets-release" {
        atomic                     = false
        chart                      = "kubernetes-external-secrets"
        cleanup_on_fail            = false
        create_namespace           = false
        dependency_update          = false
        description                = "Upgrade complete"
        disable_crd_hooks          = false
        disable_openapi_validation = false
        disable_webhooks           = false
        force_update               = false
        id                         = "externalsecrets"
        lint                       = false
        max_history                = 0
        metadata                   = [
            {
                app_version = "8.5.0"
                chart       = "kubernetes-external-secrets"
                name        = "externalsecrets"
                namespace   = "default"
                revision    = 2
                values      = jsonencode(
                    {
                        affinity = {
                            nodeAffinity = {
                                requiredDuringSchedulingIgnoredDuringExecution = {
                                    nodeSelectorTerms = [
                                        {
                                            matchExpressions = [
                                                {
                                                    key      = "alpha.eksctl.io/nodegroup-name"
                                                    operator = "In"
                                                    values   = [
                                                        "external_secrets",
                                                    ]
                                                },
                                            ]
                                        },
                                    ]
                                }
                            }
                        }
                        env      = {
                            AWS_DEFAULT_REGION           = "us-west-2"
                            AWS_REGION                   = "us-west-2"
                            LOG_LEVEL                    = "info"
                            LOG_MESSAGE_KEY              = "msg"
                            POLLER_INTERVAL_MILLISECONDS = 60000
                        }
                    }
                )
                version     = "8.5.0"
            },
        ]
        name                       = "externalsecrets"
        namespace                  = "default"
        pass_credentials           = false
        recreate_pods              = false
        render_subchart_notes      = true
        replace                    = false
      + repository                 = "https://external-secrets.github.io/kubernetes-external-secrets"
        reset_values               = false
        reuse_values               = false
        skip_crds                  = false
        status                     = "deployed"
      ~ timeout                    = 300 -> 1800
      + values                     = [
          + <<-EOT
                affinity:
                  nodeAffinity:
                    requiredDuringSchedulingIgnoredDuringExecution:
                      nodeSelectorTerms:
                      - matchExpressions:
                        - key: alpha.eksctl.io/nodegroup-name
                          operator: In
                          values:
                          - external_secrets

                env:
                  AWS_REGION: us-west-2
                  AWS_DEFAULT_REGION: us-west-2
                  POLLER_INTERVAL_MILLISECONDS: 60000  # 86400000=1 Day, 3600000=1 hour, 60000 1 min
                  LOG_LEVEL: info
                  LOG_MESSAGE_KEY: "msg"
            EOT,
        ]
        verify                     = false
        version                    = "8.5.0"
        wait                       = true
        wait_for_jobs              = false
    }

PPACI commented 5 months ago

This also prevents any import block

import {
  id = "..."
  to = "..."
}

from generating the values with terraform plan -generate-config-out=... since the values are not yet known at import time.
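For reference, a minimal sketch of that workflow (assuming Terraform 1.5+ and reusing the demo release address from the original report):

cat <<EOT > import.tf
import {
  id = "demo/demo"        # namespace/name, same ID format as terraform import
  to = helm_release.demo
}
EOT

terraform plan -generate-config-out=generated.tf
# generated.tf gets a helm_release block, but without values, since they are not known at import time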

johnsonjnpan commented 4 months ago

> Hello! Thank you for opening this issue. This is typically the workflow you would follow when using terraform import.
>
> Running terraform apply once more is required in order to get the values applied to the helm_release resource with the correct formatting.

Hello! I'm also facing a similar issue here. We are trying to import some existing Helm charts into Terraform without affecting the running charts. While we try our best to make sure the values are the same as the ones already deployed, not being able to diff the changes before "apply" seems to be a major drawback and makes this operation pretty risky.

I wonder why the current behaviour is considered the typical workflow. It would be a huge plus if we could see a proper diff before we actually apply after importing a resource.

joey-squid commented 4 months ago

The Terraform provider invalidates the metadata whenever there's a change to the values or to a number of other fields, see https://github.com/hashicorp/terraform-provider-helm/blob/main/helm/resource_release.go#L830. Unfortunately, Terraform doesn't let a provider invalidate a nested field (I don't understand why, see https://github.com/hashicorp/terraform-plugin-sdk/issues/459).

I've made a proof of concept where, instead of metadata.values, we have a computed_values field (this doesn't touch any other field). Unfortunately, I think this might be too backward-incompatible, so I'm not sure whether or not it can be merged, but I've made it available here: https://github.com/joey-squid/terraform-provider-helm. I'm happy to send a PR if this makes sense to the folks in charge.

joey-squid commented 4 months ago

Of course, a few hours after making that change I found out it was incompatible with some modules that we use, and that's not surprising at all. I'm not sure what the best way forward is.