hashicorp / terraform-provider-helm

Terraform Helm provider
https://www.terraform.io/docs/providers/helm/
Mozilla Public License 2.0

Discussion: Helm v3 & namespace automatic creation #399

Closed pierresteiner closed 4 years ago

pierresteiner commented 4 years ago

First of all, this is not really a bug; we are seeking guidance regarding a big difference between v2 and v3: the removal of automatic namespace creation: https://github.com/linkerd/linkerd2/issues/3211

We (like certainly several others) currently deploy our microservices independently with Terraform (based on the name of the branch). I don't see how this can be achieved now, because:

Does version 1.0.0 honor the removal of automatic namespace creation? (We haven't tested it yet, and found nothing explicit about it.) If that is the case, how can we mitigate the previously mentioned issue?

robinkb commented 4 years ago

I don't know exactly what your deployment process looks like, but maybe you can use kubectl to create the namespace (if it does not exist) before running Terraform?
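To make this concrete, here is a minimal sketch of that suggestion (not from the thread itself): create the namespace idempotently with kubectl before running Terraform. The namespace name is a placeholder, and `--dry-run=client` assumes a reasonably recent kubectl.

```shell
#!/bin/sh
# Placeholder: in the use case above this would be derived from the git branch.
NAMESPACE="my-feature-branch"

# `kubectl create namespace` fails if the namespace already exists, so render
# the manifest client-side and `apply` it instead, which is a no-op when the
# namespace is already present.
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -

terraform apply
```

The trade-off, as noted below, is that the namespace then lives outside Terraform's state.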

pierresteiner commented 4 years ago

Thanks for the proposal. While I cannot say that it will not work, we would lose a lot of Terraform's advantages:

legal90 commented 4 years ago

Hi @pierresteiner. I just want to add my 2 cents. I'm sorry, but I don't think the Helm provider should manage the namespace for a release. Previously, automatic namespace creation happened only because Helm 2 did it. This feature has been removed from Helm 3, so the Helm 3-compatible provider should not do it either.

IMO, the best choice for this is the kubernetes_namespace resource from the kubernetes provider. I use it very widely, at large scale, in my organization and have never had any issues with it. You can set an implicit or explicit dependency between kubernetes_namespace and helm_release to guarantee that the namespace is created first:

resource "kubernetes_namespace" "superset" {
  metadata {
    name = "superset"

    labels = {
      # ...
    }
  }
}

resource "helm_release" "superset" {
  namespace = kubernetes_namespace.superset.metadata.0.name  # implicit dependency on `kubernetes_namespace.superset`
  name      = "superset"

  repository = "https://kubernetes-charts.storage.googleapis.com"
  chart      = "superset"
  version    = "1.1.7"

  depends_on = [kubernetes_namespace.superset]  # explicit way of declaring the same dependency
}

> as each microservice will try to create it (and removal would be even worse...)

Sorry, I might not fully understand your use case, but that doesn't look like proper behavior. Usually, a service should not try to manage the namespace it runs in. The namespace should be treated as runtime / infrastructure configuration and managed separately (for example, with kubernetes_namespace, as I showed above).

P.s. Anyway, thank you for raising this question. 👍
The above is just my personal opinion. Let's see what maintainers and other community members will say.

pierresteiner commented 4 years ago

@legal90 Thank you for your proposal; this works well for a monolith. We have independent pipelines for different microservices (frontend, backend, ...) that need to end up in the same namespace (named after the git branch).
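One possible middle ground (a sketch, not something proposed in this thread) is to manage the per-branch namespace in its own small Terraform configuration with its own state, run once per branch before any of the microservice pipelines, so that no single service owns the namespace:

```hcl
# Hypothetical standalone "namespace" configuration, applied once per branch.
# `var.branch` is a placeholder supplied by the CI pipeline.
variable "branch" {
  type = string
}

resource "kubernetes_namespace" "branch" {
  metadata {
    name = var.branch
  }
}
```

Each microservice pipeline would then set `namespace = var.branch` on its helm_release and never create or destroy the namespace itself, which also avoids the removal problem quoted above.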

One part of the solution would be to define one microservice as more important than the others (i.e. the backend). But this would have unwanted side effects:

mrkwtz commented 4 years ago

Although I understand the sentiment from @legal90 that a service should not manage its own namespace, the reality is quite different, as @pierresteiner's use case shows. Sometimes namespaces are used for purely organizational purposes, and IMO it's OK then if the service creates and manages its own namespace.

But I agree that this is not the right place (repo/project) for a discussion or a fix, because this was functionality that Helm 2 provided, not the Terraform provider itself. It should be discussed in the Helm repository.

jrhouston commented 4 years ago

Thanks for opening this discussion @pierresteiner.

> Is version 1.0.0 honoring the removal of automatic namespace creation (we haven't tested it yet, and found nothing explicit about it).

Yes, the provider calls out to the same package used by the Helm CLI, so you can expect the same behaviour.

> Should it be the case; how can we mitigate the previously mentioned issue?

For the moment, the answer is to use the Terraform kubernetes provider, or kubectl, to create the namespace prior to install, as @legal90 and @robinkb have suggested.

There currently isn't a way to use Helm to create a release that manages its own namespace. However, it seems there is work in progress towards adding this. You can see this discussion for more details: https://github.com/helm/helm/issues/6794
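(For readers landing here later: the work tracked in that discussion did eventually ship. Helm 3.2 added a `--create-namespace` install flag, and sufficiently recent versions of this provider expose a matching `create_namespace` argument on helm_release, so a release can now create its namespace on install. A sketch, reusing the example values from above:)

```hcl
# Assumes a provider version recent enough to support `create_namespace`.
# The namespace is created if missing, but note it is NOT deleted when the
# release is destroyed.
resource "helm_release" "superset" {
  name             = "superset"
  namespace        = "superset"
  create_namespace = true

  repository = "https://kubernetes-charts.storage.googleapis.com"
  chart      = "superset"
  version    = "1.1.7"
}
```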

pierresteiner commented 4 years ago

Thanks @jrhouston for the precise answer. I will track progress on that issue, then.